From ESBs to API Portals, an Evolutionary Journey Part 2

In this article series we would like to build the case that API portals, of which the Intel® API Manager and Intel® Expressway Service Gateway, powered by Mashery, are representative examples, are the contemporary manifestation of the SOA movement that transformed IT in the early 2000s from a cost center into an equal partner in the execution of a company's business strategy and revenue generation.  In the introductory article in Part 1 we discussed some of the business dynamics that led to cloud computing and the service paradigm.  Let's now take a closer look at the SOA transformation in the large enterprise.

If we look at the Google Webtrends graph for the term “SOA”, using search popularity as an indicator of industry interest, we can see that interest in SOA peaked around 2007, just as interest in cloud computing started rising.  There was a brief burst of interest in the term at the end of 2012, which can be attributed to people looking for precedents in SOA as the industry moves to cloud services.

Figure 1. Google Webtrends graph for “SOA.”

Figure 2. Google Webtrends graph for “Cloud Computing.”

The search rate for the term “cloud computing” actually peaked in 2011, but unlike SOA, the trend is probably not an indication of waning interest; rather, the focus of interest has shifted to more specific aspects. See, for instance, the graphs for “Amazon AWS” and “OpenStack”.

Figure 3. Google Webtrends graph for “Amazon AWS.”

Figure 4. Google Webtrends graph for “OpenStack.”

SOA brought a discipline of modularity that has been well known in the software engineering community for more than 30 years, but had been little applied in corporate-wide IT projects.  The desired goal for SOA was to attain a structural cost reduction in the delivery of IT services through reuse and standardization.  These savings needed to be weighed against significant upfront costs for architecture and planning, as well as the reengineering effort for interoperability and security.  The expectation was a lower per-instance cost from reuse in spite of the required initial investment.

Traditionally, corporate applications have been deployed in stovepipes, as illustrated in Figure 5 below: one application per server or server tier hosting a complete solution stack.  Ironically, this trend was facilitated by the availability of low-cost, Intel-based high-volume servers starting fifteen years ago.  Under this system physical servers need to be procured, a process that takes anywhere from two weeks to six months depending on the organizational policies and asset approvals in effect.  When the servers become available, they need to be configured and provisioned with an operating system, database software, middleware and the application. Multiple pipes are actually needed to support a running business.  For instance, Intel IT requires as many as 15 staging stovepipes to phase in an upgrade for the Enterprise Resource Planning (ERP) SAP application.  The large number of machines needed to support almost any corporate application over its life cycle led to the condition affectionately called “server sprawl.”  In data centers housing thousands if not tens of thousands of servers it is not difficult to lose track of these assets, especially after project and staff turnover from repeated reorganizations and M&As.  This created another affectionate term: “zombies.”  These are forgotten servers from projects past, still powered up, perhaps even running an application, but serving no business purpose.

Figure 5. Traditional Application Stovepipes vs. SOA.

With SOA, monolithic applications are broken into reusable, fungible services as shown on the left side of Figure 6 below.  Much in the same way server sprawl used to exist in data centers, so it was with software: multiple copies deployed, burdening IT organizations with possibly unnecessary licensing costs, or even worse, with shelfware, that is, licenses paid for software never used.  As an example, in a stovepiped environment each application that requires the employee roster of a company, such as user accounts, the phone directory, expense reporting and payroll, would require its own full copy of the employee information database.  In addition to the expense of the extra copies, the logistics of keeping each copy synchronized would be complex.

What if, instead of replicating the employee roster, it were possible to build a single copy into which every application needing this information could “plug in” and use it as needed?  There are some complications: the appropriate access and security mechanisms need to be in place, and locking mechanisms for updates need to be implemented to ensure the integrity and consistency of the data.  However, the expense of enabling the database for concurrent access is still significantly less than the expense of maintaining several copies.

If we access this new single-copy employee database through Web Services technology, using either SOAP or REST, we have just created a “service”: the “employee roster service”.  If every application layer in a stack is re-engineered as a service with possibly multiple users, the stacks in Figure 5 morph into a network as shown in the left part of Figure 6. The notion of a service is recursive: most applications become composites of several services, and services themselves are composites of services.  Any service can be an “application” if it exposes a user interface (UI), or a service proper if it exposes an API.  In fact a service can be both, exposing multiple UIs and APIs depending on the intended audience or target application: it is possible to have one API for corporate access and yet another one available to third-party developers of mobile applications.
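
To make the idea concrete, here is a minimal sketch of how a consumer might call such an employee roster service over REST and JSON. The endpoint, host name and field names are hypothetical; the article does not define an actual API.

# Minimal sketch of a client for a hypothetical "employee roster service".
# The host, path and JSON fields are illustrative, not a real API.
import json
import urllib.request

ROSTER_URL = "https://services.example.com/roster/v1/employees/{employee_id}"

def get_employee(employee_id: str) -> dict:
    """Fetch one employee record over plain REST/JSON."""
    request = urllib.request.Request(
        ROSTER_URL.format(employee_id=employee_id),
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    employee = get_employee("E12345")
    print(employee.get("name"), employee.get("department"))

Every application that needs the roster (payroll, the phone directory, expense reporting) consumes this one endpoint instead of carrying its own copy of the data.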

Applications structured to operate under this new service paradigm are said to follow a service oriented architecture, commonly known as SOA.  The transition to SOA created new sets of dynamics whose effects are still triggering change today.  For one thing, services are loosely coupled, meaning that as long as the terms of the service contract between the service consumer and the service provider do not change, one service instance can easily be replaced. This feature enormously simplifies the logistics of deploying applications: a service can be replaced to improve quality, or ganged together with a similar service to increase performance or throughput.  Essentially, applications can be assembled from services as part of operational procedures.  This concept is called “late binding” of application components.
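
As a rough illustration of late binding (the class and configuration names below are hypothetical, not tied to any particular product), the consumer codes only against the service contract, and the concrete implementation is chosen from configuration when the application is assembled:

# Hypothetical sketch of late binding: the application depends only on the
# service contract (RosterService); which implementation answers is decided
# at assembly time from configuration, not at compile time.
from abc import ABC, abstractmethod

class RosterService(ABC):
    """The service contract the consumer codes against."""
    @abstractmethod
    def lookup(self, employee_id: str) -> dict: ...

class LocalRosterService(RosterService):
    def lookup(self, employee_id: str) -> dict:
        return {"id": employee_id, "source": "in-house service"}

class OutsourcedRosterService(RosterService):
    def lookup(self, employee_id: str) -> dict:
        return {"id": employee_id, "source": "SaaS provider"}

# Registry consulted when the application is assembled.
IMPLEMENTATIONS = {
    "local": LocalRosterService,
    "saas": OutsourcedRosterService,
}

def bind_roster_service(config_value: str) -> RosterService:
    return IMPLEMENTATIONS[config_value]()

if __name__ == "__main__":
    service = bind_roster_service("saas")  # swap to "local" without code changes
    print(service.lookup("E12345"))

As long as both implementations honor the contract, the swap is an operational decision rather than a development project.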

Historically, binding requirements have loosened over time.  In earlier times most application components had to be bound together at compile time; this was truly early binding.  Over time it became possible to combine precompiled modules using a linker tool and precompiled libraries.  With dynamically linked libraries it became possible to bind binary objects together at run time.  However, this operation had to be done within a given operating system, and was allowed only within strict version or release limits.

We can expect even more dynamic applications in the near future.  For instance, it is not hard to imagine self-configuring applications assembled on the fly and real time on demand using a predefined template.  In theory these applications could recreate themselves in any geographic region using locally sourced service components.

There are also business considerations driving the transformation dynamics of application components. Business organizations are subject to both headcount and budgetary constraints for capital expenses.  Under these restrictions it may be easier for an organization to convert labor and capital costs into monthly operational expenses by running its services on third-party machines through Infrastructure as a Service (IaaS) offerings, or to take one step further and contract out the complete database package implementing the employee roster directly from a Software as a Service (SaaS) provider.  All kinds of variations are possible: the software service may be hosted on infrastructure from yet another party, contracted by either the SaaS provider or the end-user organization.

The effect of the execution of this strategy is the externalization of some of the services as shown in the right hand side of Figure 6.  We call this type of evolution inside-out SOA where initially in-sourced service components get increasingly outsourced.

Figure 6. The transition from internal SOA to inside-out SOA.

As with any new approach, the SOA transformation required an upfront investment, including the cost of reengineering applications and breaking them up into service components, and of ensuring that new applications or new capabilities were service ready, or service capable.  The latter usually meant attaching an API to an application to make it usable as a component of a higher-level application.

Implementation teams found the extra work under the SOA discipline disruptive and distracting.  Project participants resented the fact that while this extra work was for the “greater good” of the organization, it was not directly aligned with the goals of their project.  This is part of the cultural and behavioral dimension that a SOA program needs to deal with, which can be more difficult to orchestrate than the SOA technology itself. Most enterprises that took a long-term approach and persisted in these efforts eventually reached a breakeven point where the extra implementation cost of a given project was balanced by the savings from reusing past projects.

This early experience also had another beneficial side effect that would pave the way for the adoption of cloud computing a few years later: the development of a data-driven, management-by-numbers ethic demanding quantifiable QoS and a priori service contracts, also known as SLAs or service level agreements.

While the inside-out transformation just described had a significant impact on the architecture of enterprise IT, the demand for third-party service components had an even greater economic impact on the IT industry as a whole, leading to the creation of new supply chains and, with these supply chains, new business models.

Large companies such as Netflix, Best Buy, Expedia, Dun & Bradstreet and The New York Times found that the inside-out transformative process was actually a two-way street.  These early adopters found that making applications “composable” went beyond saving money; it actually helped them make money by enabling new revenue streams: the data and intellectual property that benefited internal corporate departments was just as useful, if not more so, to external parties.  For instance, an entrepreneur providing a travel service to a corporate customer did not have to start from ground zero and make the large investment needed to establish a travel reservation system.  It was a lot simpler to link up to an established service such as Expedia.  In fact, this upstart did not have to be bound to a single service: at this level it makes more economic sense to leverage a portfolio of services, in which case the value added by the upstart is in finding the best choices from the portfolio.  This is a common pattern in product search services whose function is to find the lowest price across multiple stores, or in the case of a travel service, the lowest-priced airfare.

The facilitation of the flow of information was another change agent for the industry.  There was no place to hide.  A very visible example is the effect of these dynamics on the airline industry, which changed irrevocably, bringing new efficiencies but also significant disruption.  The change empowered consumers, and some occupations, such as travel agents and car salespeople, were severely impacted.

Another trend that underlies the IT industry transformation around services is the “democratization” of the services themselves. The cost efficiencies gained not only lowered the cost of doing business for expensive applications previously accessible only to large corporations with deep pockets; they made these applications affordable to smaller businesses, the market segment known as SMBs or Small and Medium Businesses.  The economic impact of this trend has been enormous, although hard to measure as it is still in process.  A third wave has already started: the industry's ability to reduce the quantum for the delivery of IT services to make them affordable to individual consumers. This includes social media, as well as more traditional services such as email and storage services such as Dropbox. We will take a look at SMBs in Part 3.

From ESBs to API Portals, an Evolutionary Journey, Part 1

A number of analysts are beginning to suggest that 2013 will likely signal the awakening from a long night in the IT industry, one that started at the beginning of the third millennium with the Internet crash. And just as recovery was around the corner, the financial crisis of 2008 dried up the IT well once more. Both crises can be characterized as crises of demand. Just past 2000, the Y2K pipeline ran dry. Some argue that the problem was overstated, whereas others argue that it was solved just in time. In either case this event triggered a significant pullback in IT spending.

Faced with an existential threat after Y2K, the IT industry did not sit still. The main outcome of these lean years has been a significant increase in efficiency, where the role of IT in companies with the most advanced practices shifted from being a cost center to being an active participant in the execution of corporate business strategy. Capabilities evolved from no accountability for resource utilization, to efficient use of capital, to nimble participation in a broad range of organizations and initiatives.  The second crisis reaffirmed the continuing need to do more in the face of shrinking budgets and very likely provided the impetus for the widespread adoption of cloud technology.

The state of the art today is epitomized by cloud computing under the service paradigm. From a historical perspective the current state of development for services is in its third iteration.

The early attempts came in various forms, from different angles and from as many vendors. The most prominent examples of this era came from application server and connectivity ISVs (independent software vendors) and from operating system vendors: Microsoft, IBM, TIBCO and the various Unix vendors of that era. The main characteristic of this era, which ran roughly from 1995 to 2005, was single-vendor frameworks, with vendors attempting to build ecosystems around their particular framework. This approach did not take off, partly because of concerns in IT organizations about vendor lock-in and partly because the licensing costs for these solutions were quite high.

The second era was the era of SOA, lasting roughly from 2000 to 2010. The focus was to re-architect legacy IT applications from silo implementations into collections of service components working together. Most of the service components were internally sourced, resulting perhaps from the breakup of former monoliths, and combined with a few non-core third-party services. Vendors evolved their offerings so they would work well in this new environment. The technology transformation costs were still significant, as were the demands on practitioners' skills.  Transformation projects required a serious corporate commitment in terms of deferrals to accommodate process reengineering, licensing fees and consulting costs. As a result, the benefits of SOA were available only to large companies. Small and medium businesses (SMBs) and individual consumers were left out of the equation.

Cloud technology drives the current incarnation of IT services following the crisis of 2008. Clouds notwithstanding, if we look at the physical infrastructure of data centers, it is not radically different from what it was five years ago, and there are plenty of data centers five years old or older still in operation. However, the way these assets are organized and deployed is changing. Much in the same way that credit, or other people's money, drives advanced economies, with the cloud other people's systems are driving the new IT economy.

Scaling a business often involves OPM (other people’s money), through partnerships or issuing of stock through IPOs (initial public offerings). These relationships are carried out within a legal framework that took hundreds of years to develop.

In the computing world, scaling a system follows a similar approach in the form of resource outsourcing: using other people's systems, or OPS. The use of OPS has a strong economic incentive: it does not make sense to spend millions of dollars on a large system that will see only occasional use.

Large-scale projects, for instance a marketing campaign that requires significant amounts of computing power, are peaky in their usage of infrastructure assets: they usually start with small trial or development runs, with large runs few and far between. A large system that lies idle during development would be a waste of capital.

Infrastructure sharing across a pool of users increases the duty cycle of infrastructure assets.  Cloud service providers define the sandbox within which a number of corporate or individual users have access to this infrastructure. Cloud computing increases the efficiency of capital use through resource pooling delivered through a service model.
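
A back-of-the-envelope calculation makes the duty-cycle argument concrete; the figures below are invented purely for illustration:

# Toy arithmetic, with invented numbers, comparing the effective hourly cost
# of owning a server that sits mostly idle versus renting one only when needed.
OWNED_SERVER_COST = 6000.0      # purchase plus three years of operating cost (hypothetical)
LIFETIME_HOURS = 3 * 365 * 24   # three-year service life
RENTAL_RATE = 0.50              # hypothetical per-hour IaaS rate

def effective_hourly_cost(duty_cycle: float) -> float:
    """Cost per useful hour when the owned server is busy duty_cycle of the time."""
    useful_hours = LIFETIME_HOURS * duty_cycle
    return OWNED_SERVER_COST / useful_hours

for duty_cycle in (0.05, 0.25, 0.80):
    owned = effective_hourly_cost(duty_cycle)
    print(f"duty cycle {duty_cycle:.0%}: owned ~${owned:.2f}/useful hour, rented ${RENTAL_RATE:.2f}/hour")

At a 5% duty cycle the owned machine costs several dollars per useful hour; only at high utilization does ownership beat the rental rate, and high utilization is precisely what a provider achieves by pooling many tenants on the same assets.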

The first step in the infrastructure transformation came in the form of server consolidation and virtualization technology. After consolidation became a mainstream IT practice, the trend has been toward the support of more dynamic behaviors. IaaS allows the sharing of physical assets not just within the corporate walls, but across multiple corporate customers. A cloud service provider takes a data center, a $200 million asset, and rents individual servers or virtual machines running inside it by the hour, much in the same way a jet leasing company takes a $200 million asset, namely an airliner, and leases it to an air carrier that otherwise would not be able to come up with the upfront capital expense. The air carrier turns around and sells seats to individual passengers, which is essentially renting a seat for the duration of one flight.

In other words, the evolution we are observing in the delivery of IT services is no different from the evolution that took place in other, more mature industries. The processes we are observing with cloud computing and the associated service delivery model are no different from the evolution of the transportation industry, to give one example.

As we will see in the next few articles, the changes brought by the cloud are actually less about technology and more about the democratization of technology: the quantum for delivery used to be so large that only the largest companies could afford IT. Over the past few years IT became accessible to small and medium businesses and even to individual consumers and developers: businesses can purchase email accounts by the mailbox for a monthly fee, and deployment models allow individual consumers to sign up for email accounts at no out-of-pocket cost. We will look at the evolution of the service model from a corporate privilege to mass availability. From a practical, execution perspective, products like the Intel® Expressway Service Gateway and Intel® API Manager were developed to support the life cycle of cloud-enabled applications, and I'll be pointing to aspects of these products to provide specific examples of the concepts discussed.

In the next article we’ll discuss the “big guns” services represented by the SOA movement in the first decade of the millennium.

Touchless Security for Hadoop – combining API Security and Hadoop

It sounds like a parlor trick, but one of the benefits of API-centric de facto standards such as REST and JSON is that they allow relatively seamless communication between software systems.

This makes it possible to combine technologies to instantly bring out new capabilities. In particular I want to talk about how an API Gateway can improve the security posture of a Hadoop installation without having to actually modify Hadoop itself. Sounds too good to be true? Read on.

Hadoop and RESTful APIs

Hadoop is mostly a behind-the-firewall affair, and APIs are generally used for exposing data or capabilities to other systems, users or mobile devices. In the case of Hadoop there are three main RESTful APIs to talk about. This list isn't exhaustive, but it covers the main ones.

  1. WebHDFS – Offers complete control over files and directories in HDFS
  2. HBase REST API – Offers access for inserting, creating and deleting single or multiple cell values
  3. HCatalog REST API – Provides job control for Map/Reduce, Pig and Hive, as well as access to and manipulation of HCatalog DDL data

These APIs are very useful because anyone with an HTTP client can potentially manipulate data in Hadoop. This, of course, is like using a knife that is all blade – it's very easy to cut yourself. To take an example, WebHDFS allows RESTful calls for directory listings, creating new directories and files, as well as file deletion. Worse, the default security model requires nothing more than inserting “root” into the HTTP call.
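
To make that concrete, the sketch below shows the kind of directory-listing call WebHDFS accepts in its default “simple” security mode. The host name is a placeholder and 50070 is the NameNode's historical default HTTP port; nothing stops the caller from claiming to be any user it likes.

# Sketch of an unauthenticated WebHDFS call in "simple" security mode.
# The caller picks its own identity simply by setting user.name.
import json
import urllib.request

NAMENODE = "http://namenode.example.com:50070"  # placeholder host

def list_directory(path: str, user: str = "root") -> dict:
    url = f"{NAMENODE}/webhdfs/v1{path}?op=LISTSTATUS&user.name={user}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    listing = list_directory("/data")
    for entry in listing["FileStatuses"]["FileStatus"]:
        print(entry["type"], entry["pathSuffix"])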

To its credit, most distributions of Hadoop also offer Kerberos SPNEGO authentication, but additional work is needed to support other types of authentication and authorization schemes, and not all REST calls that expose sensitive data (such as a list of files) are secured. Here are some of the other challenges:

  • Fragmented Enforcement – Some REST calls leak information and require no credentials
  • Developer Centric Interfaces – Full Java stack traces are passed back to callers, leaking system details
  • Resource Protection – The Namenode is a single point of failure and excessive WebHDFS activity may threaten the cluster
  • Consistent Security Policy – All APIs in Hadoop must be independently configured, managed and audited over time

This list is just a start, and to be fair, Hadoop is still evolving. We expect things to get better over time, but for Enterprises to unlock value from their “Big Data” projects now, they can’t afford to wait until security is perfect.

One model used in other domains is an API Gateway or proxy that sits between the Hadoop cluster and the client. Using this model, the cluster only trusts calls from the gateway, and all potential API callers are forced to use the gateway. Further, the gateway's capabilities are rich and expressive enough to perform the full depth and breadth of security for REST calls, from authentication to message-level security, tokenization, throttling, denial-of-service protection, attack protection and data translation. Even better, this provides a safe and effective way to expose Hadoop to mobile devices without worrying about performance, scalability and security.  Here is the conceptual picture:

Intel(R) Expressway API Manager and Intel Distribution of Apache Hadoop

In the previous diagram we are showing the Intel(R) Expressway API Manager acting as a proxy for WebHDFS, HBase and HCatalog APIs exposed from Intel’s Hadoop distribution. API Manager exposes RESTful APIs and also provides an out of the box subscription to Mashery to help evangelize APIs among a community of developers.

All of the policy enforcement is done at the HTTP layer by the gateway, and the security administrator is free to rewrite the API to be more user-friendly to the caller; the gateway will take care of mapping and rewriting the REST call to the format supported by Hadoop. In short, this model lets you provide instant Enterprise security for a good chunk of Hadoop capabilities without having to add a plug-in, additional code or a special distribution of Hadoop. So… just what can you do without touching Hadoop? Taking WebHDFS as an example, the following is possible with some configuration on the gateway itself:

  1. A gateway can lock-down the standard WebHDFS REST API and allow access only for specific users based on an Enterprise identity that may be stored in LDAP, Active Directory, Oracle, Siteminder, IBM or Relational Databases.
  2. A gateway provides additional authentication methods such as X.509 certificates with CRL and OCSP checking, OAuth token handling, API key support, WS-Security, and SSL termination and acceleration for WebHDFS API calls. The gateway can expose secure versions of the WebHDFS API for external access.
  3. A gateway can improve on the security model used by WebHDFS, which carries identities in HTTP query parameters; these are more susceptible to credential leakage than a security model based on HTTP headers. The gateway can expose a variant of the WebHDFS API that expects credentials in the HTTP header and seamlessly maps them to the WebHDFS internal format.
  4. The gateway workflow engine can map a single-function REST call into multiple WebHDFS calls. For example, the WebHDFS REST API requires two separate HTTP calls for file creation and file upload. The gateway can expose a single API that handles the sequential execution and error handling, exposing a single function to the user (see the sketch after this list).
  5. The gateway can strip and redact Java exception traces carried in WebHDFS REST API responses (for instance, JSON responses may carry org.apache.hadoop.security.AccessControlException.*, which can spill details beneficial to an attacker).
  6. The gateway can throttle and rate-shape WebHDFS REST requests, which can protect the Hadoop cluster from resource consumption caused by excessive HDFS writes, open file handles, and excessive create, read, update and delete operations that might impact a running job.
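
As an illustration of item 4, here is a sketch of the two-step create-then-upload sequence that WebHDFS requires and that a gateway could hide behind a single call (placeholder host and user; error handling omitted):

# Sketch of the two HTTP calls WebHDFS needs to create and upload a file;
# a gateway can expose this as a single call and follow the redirect itself.
import urllib.request

NAMENODE = "http://namenode.example.com:50070"  # placeholder host

class KeepRedirect(urllib.request.HTTPRedirectHandler):
    """Hand back the 307 response instead of following it, so we can read Location."""
    def http_error_307(self, req, fp, code, msg, headers):
        return fp

def webhdfs_create(path: str, data: bytes, user: str = "hdfsuser") -> int:
    # Step 1: the NameNode answers op=CREATE with a 307 redirect whose
    # Location header names the DataNode that will accept the bytes.
    opener = urllib.request.build_opener(KeepRedirect())
    create_url = f"{NAMENODE}/webhdfs/v1{path}?op=CREATE&overwrite=true&user.name={user}"
    step1 = opener.open(urllib.request.Request(create_url, method="PUT"))
    datanode_url = step1.headers["Location"]

    # Step 2: send the file content to the DataNode; 201 Created on success.
    step2 = urllib.request.urlopen(
        urllib.request.Request(datanode_url, data=data, method="PUT"))
    return step2.status

if __name__ == "__main__":
    print(webhdfs_create("/data/report.csv", b"a,b,c\n1,2,3\n"))

Collapsing these two calls behind one gateway endpoint also gives the gateway a natural place to enforce header-based credentials and strip the stack traces mentioned above.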

This list is just the start. API Manager can also perform selective encryption and data protection (such as PCI tokenization or PII format-preserving encryption) on data as it is inserted into or deleted from the Hadoop cluster, all by sitting in between the caller and the cluster. So the parlor trick here is really moving the problem from trying to secure Hadoop from the inside out to moving and centralizing security at the enforcement point. If you are looking for a way to expose “Big Data” outside the cluster, the API Gateway model may be worth some investigation!

Blake

Mobile APIs for Healthcare

Next week I am participating in a webinar called Mobile Optimized Healthcare API Programs. From a technical perspective we'll be looking at some interesting integration between Intel's Security Gateway and Mashery; from a healthcare standpoint, the discussion looks at what new kinds of use cases are possible in this ecosystem.

For all the hype that financial services and other sectors get vis-à-vis security, the healthcare security problem set really is harder than the rest. At the same time, there are dramatic benefits to enabling mobile integration for healthcare; it benefits your number one asset: you. Whether it's Fitbit, Nike+, or just healthcare pros with iPads, mobile is uniquely suited to health- and wellness-related applications. But what is missing is the APIs and integration to deliver on the use cases.

The webinar looks at the following concerns:

  • Gateway security patterns to safely repackage legacy data and services as APIs – in short, enable access, not attackers
  • How to construct, share, and promote APIs to developers using API workshops and branded portals – make it easy for developers to do things right
  • How to build a mobile-optimized back end that securely exposes enterprise assets via standard internet protocols (e.g. OAuth & JSON) – what comprises the mobile DMZ? How is it similar and different than a plain, old Web DMZ?

As much as I enjoy middleware, security and protocols, what is most interesting about healthcare is the new types of use cases that bring all the technology together. I guess that is as it should be. Still, as a technologist it's neat to see after all these years that Web services and Security Gateways play a leading role in leading-edge technology deployments today.

Making a mobile DMZ is subtly different from building old-school Web DMZs. Most of the principles remain the same, but the implementation is different. In addition, there are new concerns to handle, such as session management, token resolution and asynchronous protocols, which function differently in mobile apps than on the web. In the webinar we'll do a deep dive on these topics and what they might mean for your organization.
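
As a rough sketch of what token resolution at the gateway can look like (the token store, header names and scopes below are purely hypothetical):

# Hypothetical sketch of token resolution at a mobile gateway: the app presents
# an opaque bearer token, the gateway resolves it to an identity, and only then
# forwards the request. The in-memory store stands in for a real token service.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Identity:
    user: str
    scopes: tuple

TOKEN_STORE = {
    "3f9a2c": Identity(user="alice@example.com", scopes=("records:read",)),
}

def resolve_token(authorization_header: str) -> Optional[Identity]:
    if not authorization_header.startswith("Bearer "):
        return None
    token = authorization_header.split(" ", 1)[1].strip()
    return TOKEN_STORE.get(token)

def handle_request(headers: dict, required_scope: str) -> str:
    identity = resolve_token(headers.get("Authorization", ""))
    if identity is None:
        return "401 Unauthorized"
    if required_scope not in identity.scopes:
        return "403 Forbidden"
    # A real gateway would now forward the call with the resolved identity
    # attached, for example as a signed header the back end can trust.
    return "200 OK (forwarded as %s)" % identity.user

if __name__ == "__main__":
    print(handle_request({"Authorization": "Bearer 3f9a2c"}, "records:read"))
    print(handle_request({}, "records:read"))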

By Gunnar Peterson – this post originally appeared on the 1Raindrop blog

New PCI DSS Cloud Computing Guidelines – Are you compliant?

This month the Cloud SIG of the PCI Security Standards Council released supplemental guidelines covering cloud computing. We're happy to see APIs included as a recognized attack surface.  As this document makes clear, responsibility for compliance for cloud-hosted data and services is shared between the client and the provider.  API providers moving to the cloud should pay close attention to this document:  Section 6.5.5 covers Security of Interfaces and APIs, while Appendix D covers implementation considerations that include API-related topics.  For cloud-hosted systems, an API gateway can simplify implementation, secure PII and PAN data in motion, provide compliance and ensure auditability in these areas.

The last paragraph of Section 6.5.5 reads:

APIs and other public interfaces should be designed to prevent both accidental misuse and malicious attempts to bypass security policy. Strong authentication and access controls, strong cryptography, and real-time monitoring are examples of controls that should be in place to protect these interfaces.

While Appendix D: PCI DSS Implementation Considerations asks:

  • Are API interfaces standardized?
  • Are APIs configured to enforce strong cryptography and authentication?
  • How are APIs and web services protected from vulnerabilities?
  • Are standardized interfaces and coding languages used?
  • How is user authentication applied at different levels?

Using a service gateway can ensure that access controls, PII and PAN encryption, and monitoring are consistently applied and enforced for all APIs.  This in turn reduces the likelihood that a single poorly-coded or overlooked API will compromise the entire system. Enhanced vulnerability protection is provided by a centralized point to turn away malicious exploits such as SQL injection or Cross-site scripting (XSS) attempts.  This control point also provides data leak protection for data leaving the enterprise.  The use of a gateway also allows the API provider to construct a consistent façade with standardized interfaces to be utilized for all exposed APIs and web services.
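
As a deliberately simplified illustration of what a centralized enforcement point does (the patterns below are naive stand-ins; a real gateway ships curated, regularly updated rule sets):

# Simplified sketch of centralized input screening at a gateway: one filter
# applied in front of every API, rather than per-application code.
import re

# Naive illustrative signatures only; not a substitute for real protection.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # crude SQL injection check
    re.compile(r"(?i)<\s*script\b"),            # crude XSS check
    re.compile(r"\b\d{13,16}\b"),               # possible PAN leaving in the clear
]

def screen(payload: str) -> bool:
    """Return True if the gateway should reject or flag the message."""
    return any(p.search(payload) for p in SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    print(screen("id=42"))                                      # False
    print(screen("id=42 UNION SELECT card_number FROM cards"))  # True
    print(screen("<script>alert(1)</script>"))                  # True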

Another area where a gateway can help with PCI-DSS compliance is in containing audit scope via tokenization.  One of the design considerations for protecting cardholder data asks:

Where are the “known” data storage locations?

Using a gateway that supports tokenization can limit PCI scope to the gateway device itself.  The gateway can then be hosted on a higher-tier hosting platform (e.g. a Virtual Private Cloud) while allowing logic servers without access to cardholder data to be hosted on a more cost-effective, multi-tenant platform. A common model here is to tokenize PAN data as it enters the datacenter, minimizing scope impact, which can be done using proxy tokenization in the API gateway. This usage model is ideal for ecommerce retailers that accept credit card data over an HTML form post or other HTTP interface.
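
Here is a minimal sketch of the tokenization idea, using an in-memory vault and random tokens purely for illustration; a production deployment would use a hardened vault and, where required, format-preserving encryption:

# Minimal sketch of proxy tokenization: the PAN is swapped for a random token
# before the request reaches downstream logic servers, so only the gateway and
# its vault remain in PCI scope. The in-memory vault is for illustration only.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # Keep the last four digits visible, as is common on receipts.
        token = "tok_" + secrets.token_hex(8) + "_" + pan[-4:]
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]

if __name__ == "__main__":
    vault = TokenVault()
    token = vault.tokenize("4111111111111111")
    print("downstream systems see:", token)
    print("gateway resolves back to:", vault.detokenize(token))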

For help assessing tokenization options, we have made available a Buyer's Guide:  Tokenization for PCI DSS.  For the broader view covering other security gateway usage models, we are also sharing the Buyer's Guide: Gateway Security.  Finally, we'd refer readers to the Cloud Builders program's Cloud Security Reference Architecture for some ready-made blueprints and cloud software management platforms.

Your healthcare data in whose hands?

A week or so back I needed to put some stuff in storage as we're moving house. Apparently my fine heirlooms are not conducive to selling the place, I was told. The storage facility I chose was pretty local and had the look of the scene from the end of Raiders of the Lost Ark where the Ark of the Covenant is placed. Anyway, I got talking to the staff and they were pretty happy to admit (almost proud) that they stored both healthcare and legal records there. I must have an honest face.

Box 27B/6 you say? I'm sure I passed it last Friday.

The place itself did not have room for direct access to each box; instead, they noted where a box was in the stack and then buried that stack behind other, more frequently accessed stacks of boxes. I asked what happened when they lost a box, and the guy on the forklift rolled his eyes and said, “You don't want to know.”

This is all well and good until you consider that occasionally it's important to access medical history in a hurry. What if there's a hold-up on a diagnosis? Or a clinician needs to give drugs or treatment for an acute condition where there's the possibility of interfering with previous meds? I know what my luck is like with admin mishaps, and these are not odds I want when my (or your) health is at stake. And that's assuming the admin from your GP went smoothly and the request for retrieving your record was made against the right box in the first place.

At this point I could go into the waste that keeping this kind of data tied up in paper represents when you might want to look at a population to see what effects medicines, the environment and upbringing have on health. The fewer paper records you can examine, the more doubt there is about the figures that come out, and the more studies need to be done. In other words, it costs a country (us) money and lives to keep medical records on paper.

I was involved in the project that tried to solve this problem in the UK the first time round. “It had partial success” would be one way to describe how it went. Now the UK health secretary, Jeremy Hunt, has announced a second attack on the problem. A less monolithic one, hopefully, and one less dependent on the pain that mega SOA architectures built on heavyweight HL7 bring. It is possible to build for health data sharing a different way, based on what projects need as opposed to what government ministers wish for as a legacy.  See what the Oxford MMM project is doing in this area.

We now have new ways to tackle this problem based on the ease of exposing data over lightweight APIs – maybe a bit much for a GP surgery, but an NHS Trust IT department could get their heads round it. Big Data in healthcare means we're no longer tied to huge SQL DB suppliers, or to having health data strongly typed with HL7 where that's not needed. OAuth 2 and SAML mean we can share, track and trust access tied to staff identity.

Anyone for a high-speed messaging / API governance tool that can understand healthcare protocols and auth tokens AND talk to backend data services? Intel ESG for Healthcare Information Exchange. Join our mobile healthcare data web seminar as well.

About Peter Logan:

App Engineer & Pre-Sales for Intel's Application Security & Identity Products Group. I spend my time wandering about looking at interesting customer problems and generally getting messaging to go better, faster and safer. What I aim to give here is general tech goodness about actually using Intel® SOA Expressway in real-world applications for SOA, security, integration and more. But I hope you'll find it's not just how-tos here; I also want to talk about why you'd want to do things this way with Expressway.


Webinar: Mobile Optimized Healthcare API Programs

Registration is now open for our next webinar on Tuesday, March 5 at 10AM Pacific, 1PM Eastern – “Mobile Optimized Healthcare API Programs:  New Revenue from Legacy Data”

Mobile apps, partner & developer API programs, and healthcare data are converging to create new revenue opportunities for Healthcare providers via API developer community portals.  We’ve assembled a panel of experts from the industry to present case studies from Aetna and the Blue Cross and Blue Shield Association that illustrate best practices for building a successful API program.

While the case studies come from healthcare, companies in any industry can apply many of the ideas from this webinar to jumpstart their own API community initiatives.  The discussion will cover topics including:

  • Gateway security patterns to safely repackage legacy data and services as APIs
  • How to construct, share, and promote APIs to developers using API workshops and branded portals
  • How to build a mobile-optimized back end that securely exposes enterprise assets via standard internet protocols (e.g. OAuth & JSON)

Our speakers will include security expert Gunnar Peterson, Mashery’s Chuck Freedman, Intel’s Blake Dournaee, and a special guest speaker from Aetna.  In addition, all attendees will receive a Mobile API Architecture White Paper.  We hope you’ll join us!

Mobile Middleware for the Enterprise (part 1)

Introduction

With the trends of consumerization and bring-your-own-device (BYOD) acceptance, enterprises are increasingly seeking to securely integrate tablets and smartphones into their environments.  Meanwhile, external customers and partners desire mobile apps that provide on-demand, self-service alternatives to traditional consumer web portals.  Mobile middleware can ease this integration, providing a consistent framework and set of interfaces for a wide range of applications and data sources.  This is the first in a series of posts intended to help the enterprise IT buyer to better understand the benefits of mobile middleware, as well as to make an informed decision when choosing among the many products in this space.

Use case 1:  Employee productivity

Mobile devices bring the potential for ubiquitous access to corporate resources, providing employees with an “always-on” connection to the enterprise.  Email, calendar, and contacts are no longer sufficient for many enterprises – Line-of-Business applications with secure access to corporate data will further improve worker productivity.

While the first stage of mobile access was delivered using off-the-shelf software packages, the next wave will include much more custom code.  According to a November 2011 Forrester study, over 50% of enterprises rely on custom applications developed either in house or by externally-contracted developers.  These applications will require access to a mix of back-end services, from existing SOAP applications to newly-developed RESTful APIs, as well as cloud-hosted services such as salesforce.com.


An established enterprise may already have an ESB for internal services, or it may be using loosely-coupled, point-to-point connections between apps and services.  Either way, the ESB likely was not designed with wide-scale or external connectivity in mind.  Mobile middleware can help to bridge this gap, providing a RESTful interface to legacy services and data sources.  It can also provide enterprise mobile application developers with a catalog of available APIs and documentation on how to consume them, speeding development and increasing consistency across applications.
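
As a hedged sketch of the kind of bridging meant here (the SOAP endpoint, namespace, element names and REST route are all invented), a thin facade might accept a REST/JSON call and translate it into a legacy SOAP request:

# Hypothetical sketch of a REST/JSON facade in front of a legacy SOAP service.
# The SOAP endpoint, namespace and element names are invented for illustration.
import xml.etree.ElementTree as ET

import requests
from flask import Flask, jsonify

app = Flask(__name__)
SOAP_ENDPOINT = "http://legacy.example.com/OrderService"  # placeholder

SOAP_TEMPLATE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrder xmlns="http://example.com/orders"><OrderId>{order_id}</OrderId></GetOrder>
  </soap:Body>
</soap:Envelope>"""

@app.route("/api/orders/<order_id>")
def get_order(order_id):
    # Translate the REST call into the legacy SOAP request...
    soap_response = requests.post(
        SOAP_ENDPOINT,
        data=SOAP_TEMPLATE.format(order_id=order_id),
        headers={"Content-Type": "text/xml"},
        timeout=10,
    )
    # ...and flatten the XML reply into the JSON shape mobile apps expect.
    root = ET.fromstring(soap_response.content)
    status = root.findtext(".//{http://example.com/orders}Status")
    return jsonify({"orderId": order_id, "status": status})

if __name__ == "__main__":
    app.run(port=8080)

The same facade is also a natural place to publish the catalog entry and documentation that application developers consume.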

Use case 2:  External access

Many enterprises have offered their customers a self-service web engagement portal for some time.  Whether it is used for commerce, basic account management, or other purposes, this portal ultimately connects back into enterprise services.  With mobile browsers taking an increasing share of page views, portals that deliver substandard user experience are being reimplemented as native enterprise mobile applications.

Mobile vs. desktop browser share, 2011-2012
Source: StatCounter

While the scope of services to be accessed by external users is typically much narrower than in the employee productivity use case, the scale and security considerations are much greater.  Also, digital natives expect integration with external identity providers, social networking, and other external cloud services.  As with internal-facing applications, mobile middleware can act as a glue layer for these customer apps, providing integration with external services while securing access to internal data.

The Case for Mobile Middleware

Regardless of which use case is the primary motivator for adopting a mobilization strategy, it’s clear that legacy web and data services are not readily consumable by mobile devices.  An enterprise, then, has two options:  remediate each service independently, or adopt a mobile middleware layer that can bridge the gaps to mobile access.  Development cost savings from the mobile middleware approach will depend on the number of services to be addressed and level of integration effort required.  However, by abstracting away these integration functions, enterprises can be assured that security policies are being uniformly implemented, enforced, and updated — no easy task if custom code is added to a large number of applications.

A mobile middleware strategy can address the issues shared by both of these use cases:  providing security and broad integration capabilities while delivering the performance necessary for a responsive user experience.

Other Resources

Over the next few weeks I will explore how mobile middleware can help an enterprise integrate its own REST and SOAP services with 3rd-party APIs.  I'll also describe some of the security and performance considerations that go along with different approaches.  Finally, I will look at options for application development that can benefit from a consistent, RESTful back end.

In the meantime, here are some links to other material that should be useful when building a strategy for enterprise mobile applications:

API strategy & practice conference in NYC – Are you going?

Alright, I am sure you have heard this again and again, but it's worth saying one more time. The first-ever API Strategy & Practice conference is going to be in NYC on Feb 21 and 22 (http://www.apistrategyconference.com/). If you are just finding this out, it might be way too late for you to get in (but I will tweet anything interesting happening from inside 🙂 ).  There are 72 companies confirmed to participate, sending their API whiz kids, gurus, learners, teachers and procrastinators to make a difference. Intel is proud to be a Gold sponsor of this event.

Yes, Intel. Not only does Intel do software, but we do it really well too. We have an outstanding API Manager, released recently, which will be showcased there. If you happen to attend, please stop by my two speaking sessions/panels.

Day 1: 2:20-3:30 – Track 3: API Security and Scalability

As APIs gain adoption they become ever more critical gateways to a company's core business – ensuring access is secure and scalable is mission critical for your business. Presentations include:

  1. Paul Madsen (@PingIdentity) of  Ping Identity
  2. Mark O’Neill (@TheMarkONeill) of Vordel
  3. Travis Reeder (@treeder) of Iron.io
  4. Andy Thurai (@AndyThurai) of Intel
  5. Discussion panel on the challenges and solutions for API Security and Scalability

Day 2: 11.00-12.10 – Track 1: Mobile

APIs and Mobile are symbiotic – each driving the other with a good API strategy arguably key to a good Mobile Strategy. Presentations include:

  1. Andy Thurai (@AndyThurai) of Intel
  2. Max Katz (@maxkatz) of Tiggzi
  3. Marc Weil (@marcweil) of Cloudmine
  4. Miko Matsumura (@mikojava) of Kii
  5. Discussion Panel on the evolution of API / App Ecosystems and platforms

3Scale and Kin Lane did an amazing job of putting this together again (the first attempt was cancelled thanks to Hurricane Sandy). Hope to see you all there. Don't forget to stop by our booth. You will get a chance to see me :), and win some goodies if you stop by and make your presence known, but more importantly you will get to learn about Intel API Manager.

Looking forward to seeing you all there.

-Andy Thurai (Twitter: @AndyThurai)

Enterprise Mobile Applications at Apps World

Apps World Focus on Enterprise Mobile Applications

My team attended Apps World in San Francisco last week.  The show almost could have been called “Mobile Middleware World”.  It was clear that we’re not the only ones who think 2013 will be the year of the Enterprise Mobile App.  While the conference had plenty of independent developers and consumer-oriented tools, many of the folks stopping by our booth were focused on the enterprise.  We received several questions about our solutions for enterprise mobile applications and API management.  API providers were also bridging the gap between consumer- and enterprise-grade services, with talks and demos both days from StackMob, eBay, Box, and SendGrid.

Intel is a Software Company?

We received a number of questions related to our API management products.  Once we got past the initial question of “Intel is a software company?”, it was clear that our vision for mobile middleware is well-aligned with what developers of enterprise mobile applications are seeking.  We received positive feedback on the end-to-end capability we offer:

  • Secure and robust API management on the back end
  • Best-in-class API discovery and developer onboarding through our connection to the Mashery portal
  • HTML5 and Appcelerator provide flexible app development on virtually any device.

Digital Payments and Enterprise Mobile Applications

One of the bigger trends I saw had to do with digital payment systems.  This is a rapidly-evolving area, with virtual currency moving from games into other apps, potentially expanding into enterprise mobile applications.  Other payment systems, such as digital wallets and P2P, seemed to be top-of-mind as well.  It’s clear that mobile application and API security will be critical for success, regardless of which standards win out in these areas.

Building the Enterprise Mobile Application Factory

Also in our booth, Kin Lane gave a very popular talk on Building the Enterprise Mobile Application Factory.  If you missed his talk, it is available online.  Our HTML5 development talk was also very well-received, with many participants signing up right away for our cloud-based HTML5 developer tools.

It is shaping up to be an exciting year for us in the enterprise mobile applications and API management space.  Apps World was just the beginning.  For more information, stay tuned to this blog, follow us on Twitter at @IntelAPIGateway, download our whitepaper on API Patterns for Cloud & Mobile, or check out some of our mobile middleware tutorials.