Composite Distributed Applications and RESTful APIs

I was at Gartner Catalyst last week in San Diego for a luncheon keynote where I explored the concept of a composite distributed application. This is an idea that I have been chewing on for some time and is a direct result of how Enterprises are thinking about application architecture in light of “cloud” and “big data” as well as some of the trends we are seeing in our own customer base for Intel(R) Expressway Service Gateway.

First question: Where do Enterprise applications begin and end in 2012? Let’s state the obvious: the definition of an application as a monolithic piece of object code is ancient history. Let’s try the next definition, a standard n-tier shared nothing web application. This is certainly more timely, but I would also consider it dated.

If we add external cloud services, such as xPaaS (to use Gartner’s terminology) and disparate data warehouses or “big data” located in geographically dispersed data-centers, the n-tier definition of an application in a single place doesn’t quite capture everything and may leave out important pieces. Key pieces of functionality may live “elsewhere,” and this is where our standard enterprise application becomes distributed, with pieces in different physical locations, as well as composite, meaning it includes external xPaaS services such as storage, queuing, authentication, or similar services.

So when we think about the larger boundaries of a composite distributed application, what are some salient properties? I came up with the following list for my talk:

Composite Distributed Application Properties

Hybridized – Includes new feature development as well as the integration of legacy code, which can be done by integrating legacy message or document formats and protocols. In other words, Enterprises don’t want to throw out existing functionality, even if it happens to be written in a different programming language.

Location Independent – Important pieces of logic, persistence, and functionality may be split across 1-n clouds: a mix of standard data center deployment, private cloud, and public cloud. The application is essentially living across different clouds. All clouds can win.

Knowledge Complete – As traditional enterprises emulate web companies with big data analytics and web intelligence, distributed applications must access the results of “Big Data” analytics, possibly owned by different factions in the Enterprise. The composite distributed application will need to aggregate results and make important predictions across these sources, as well as include any relevant data warehouse and JDBC sources.

Contextual – Produces just-in-time results based on client context, device, and identity. For example, the application I/O model must meet the demands of mobile clients, such as serving REST APIs, as well as the demands of internal enterprise stakeholders.

Accessible & Performant – Produces data compatible with any client on any operating system, with minimal latency. Scales to hundreds of thousands of users, where clients are a mix of smartphones, tablets, browsers, or other devices.

Secure and Compliant – Meets compliance and security requirements for data in transit and data at rest, such as PCI, HIPAA, and others. This may involve a mix of traditional “coded-in” security, security at the message level (via a proxy), standard transport-level security, and data tokenization prior to analytics.

Common Service Layer

A common theme among current Intel service gateway customers is the creation of a common service layer that unifies existing back-end services. What happens is that services grow organically on different platforms and operating systems, written in different languages, but they can be orchestrated under a common RESTful theme (for more background on REST fundamentals, see DZone’s REST Reference Card). For instance, many of our customers have a mix of REST-style and SOAP web services and then use a gateway facade or layer to unify them. Unification, however, is only one of the requirements. The second is external exposure to new clients and partners with appropriate performance, trust, threat, and, increasingly, throttling/SLA features. Trending right now are OAuth and API key mechanisms, especially when the clients are expected to be mobile devices.
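To make the facade idea concrete, here is a minimal sketch (not Intel Expressway Service Gateway configuration, which is product-specific) of the two jobs such a layer performs: enforcing an API key at the edge and routing a unified RESTful path to whichever back-end actually implements it. The service names, keys, and URLs are hypothetical.

```python
# Hypothetical routing table: a unified REST namespace in front of
# back-ends that grew up on different platforms and protocols.
BACKENDS = {
    "orders":  "https://soap-legacy.internal/orders",   # older SOAP service
    "catalog": "https://rest.internal/v2/catalog",      # newer REST service
}

# Keys issued to external clients; per-key metadata supports throttling/SLA.
API_KEYS = {
    "key-mobile-123": {"rate_limit_per_min": 600},
}

def route(path: str, api_key: str) -> str:
    """Check the API key, then map the first path segment to a back-end."""
    if api_key not in API_KEYS:
        raise PermissionError("unknown API key")
    service = path.strip("/").split("/")[0]
    if service not in BACKENDS:
        raise LookupError(f"no back-end registered for '{service}'")
    return BACKENDS[service]
```

The point of the sketch is that callers see one RESTful surface while the facade absorbs the differences among back-ends; swapping a data-center URL for a cloud-hosted one changes only the routing table, not the clients.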

How does this architecture grow into a composite distributed application? This is where location can play a role: as enterprises adopt more cloud PaaS services, their existing services will grow beyond what is found in the data-center, to what is found outside the data-center.

For example, one large service provider that we work with uses Intel Expressway Service Gateway to create a facade for 50+ RESTful services. In the future, as they adopt cloud, additional services may also be delivered from the cloud and fit under the facade, so the RESTful facade and services together may all properly be called “the application” – here the application is a mash-up of services split among clouds.

We call it “the application” because all three pieces, the gateway, the internal services, and the cloud services, together comprise the whole. The next question is how to secure these API interactions and ensure this new breed of application meets performance and compliance requirements.

I think the answer is that you have to focus on the data itself as it is sent and received at each API hop. This means more emphasis on tokenization and encryption, as well as an understanding of the relevant authentication and authorization controls and how they apply depending on who needs to access the data. For “Big Data” this may mean pre-processing map/reduce input to provide tokenization or encryption prior to performing analytics, essentially ensuring compliance before processing.
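As a hedged illustration of that pre-processing step, the sketch below tokenizes sensitive fields before a record ever reaches the analytics job, so map/reduce input never contains raw cardholder data. The field names and the HMAC secret are illustrative; in practice the secret would live in a token vault or HSM, not in code.

```python
import hashlib
import hmac

SECRET = b"vault-managed-secret"        # illustrative; keep in a vault/HSM
SENSITIVE_FIELDS = {"card_number", "ssn"}

def tokenize(value: str) -> str:
    """Deterministic token: equal inputs yield equal tokens, so joins and
    group-bys in the downstream analytics still work on tokenized data."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict) -> dict:
    """Replace sensitive fields with tokens prior to map/reduce input."""
    return {k: tokenize(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}
```

A deterministic (keyed-hash) token is chosen here because analytics often needs to correlate records by the sensitive field without seeing its value; a scheme requiring reversibility would instead use format-preserving encryption or a vault lookup.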



One Response to “Composite Distributed Applications and RESTful APIs”

  1. Jason Armstrong Says:

    Excellent post and information. One challenge that I have seen in internal and external cloud implementations of SOA is with services that have a high transaction rate plus extreme sensitivity to response time delays, and therefore struggle to navigate the different cloud units. For example, if a service must navigate through 1-n clouds to complete its flow, then even normal transmission latency may affect the overall ability to meet tight SLAs. While that latency may not be an issue for a service with an SLA of several seconds, SLAs that require responses in less than 1 second struggle. The usually incorrect response from a service designer is to consolidate the finer-grained/flexible services into more coarse-grained/inflexible services with the thought that they solved the transmission latency issue by “removing the hops.” However, these services quickly become monolithic applications. How do you work through that grey line of gain vs. loss when implementing in 1-n clouds for a single service flow?
