Who's a SOAsaurus?

[Image: a dead dinosaur]

I told you I was ill.

The phrase “Don’t be a SOAsaurus” is being bandied about on Twitter and the like, and it got me thinking about using that particular analogy to describe SOA web services practices and contrast them with the clever little RESTful API service mammals that maybe saw off the big, ugly lizards.

Before getting into computing I spent some time in geology, so I’m coming at this argument from a slightly odd standpoint. For any geologists reading: I was structural, ophiolites and terrane docking. We used to look down on this palaeontology stuff, and everyone looked down on the geophysicists.

To recap the dinosaurs, then. We know them from the fossil record. Becoming a fossil is a one-in-a-bazillion chance, so there have to be a lot of you about; by definition, any fossil you find must have come from a wildly successful species. So SOAsaurus must be some form of compliment. On top of that, dinosaurs (as we commonly refer to them) lasted over one hundred million years, dominated the land, sea and sky AND gave birth to the mammals. By that analogy REST wouldn’t be here but for SOA. In the same way, SOA has had its time in the press and still continues to have its time in the enterprise. Fully half of the enquiries Gartner specialists like Paolo Malinverno get are from people working on SOA, installing XML-based service architectures and developing new services.

The analogy extends to our RESTful mammals as well. At night they had the advantage of warm blood, letting them go out scavenging dead dinosaurs and stealing eggs. In the same way I see technologies scavenged from SOA: the sledgehammer-to-crack-a-nut UDDI re-emerges as the API portal, and WSDL starts to re-emerge as WADL. Vendors see that the wheel is being reinvented, so technologies like service and security gateways extend their functionality to encompass both worlds.

When the dinosaurs did go, it took the combined effects of millennia of climate change from the volcanic eruptions that formed a decent portion of what is now India, plus the impact of a meteorite big enough to leave a 200km-wide crater. That was big enough to wipe out two thirds of all species on the planet. It reminds me of a major UK banking group I’ve worked with whose mainframe still ran on Token Ring and used a protocol older than I am! Big, successful technologies are hard to kill off for several reasons, primarily because they work within the frame of reference they were built for. We’ll be living with SOA practices running organisations for many decades yet.

But maybe I got the analogy wrong. REST isn’t the mammal after all, and the dinosaurs never died out. They survived by ditching the weight and becoming more agile, with less in the way of teeth. In the same way, it’s not hard to imagine why developers want to get rid of the stack of J2EE, XML, SOAP, WS-ReliableMessaging, WS-PolicyForSecureReliablePolicyIdentityFederationPolicy…. REST represents the birds, and one’s about to crap on your shoulder sometime soon.

I think I agree with SOAsaurus; I like the term. SOA gets to live as Argentinosaurus, Compsognathus or Protoceratops. REST could be the albatross, the turkey or the penguin (okay, now I’m poking fun). As Archaeopteryx was no doubt fond of saying: there’s room for a bit of both.

Of course this is all fine, discussing the mammals and the dinosaurs, but it’s the bacteria that use us and let us live in the end. Any suggestions as to the computing equivalent?

Whether you’re a SOAsaurus seeking SOA governance, are looking to evolve using REST/SOA mediation, or are already walking upright with REST and need to manage APIs, we’ve got you covered.

Follow me @PeteL0gan uk.linkedin.com/in/petelogan/.

Elastic Scaling of APIs in the Cloud

As an Enterprise Architect for Intel IT, I worked with IT Engineering and our Software and Services group on the elastic scaling of the APIs that power the Intel AppUp® center. Our goal was to scale our APIs to at least 10x our baseline capacity (measured in transactions per second) by moving them to our private cloud, and ultimately to be able to connect to a public cloud provider for additional availability and scalability. Here’s a quick set of practices we used to achieve our goal:

  1. Virtualize everything.  This may seem obvious and is probably a no-op for new APIs, but in our case we were using bare-metal installs at our gateway and database layers (the API servers themselves were already running as VMs). While our gateway hardware appliance had very good scalability, we knew we were ultimately targeting the public cloud and that our need for dynamic scaling could exceed our ability to add new physical servers. Using a gateway that scales in pure software virtual machines, without the need for special purpose-built hardware, helped us achieve our goal here.
  2. Instrument everything.  We needed to be able to correlate leading indicators like transactions per second to system load at each layer so we could begin to identify bottlenecks. We also needed to characterize our workload for testing – understanding a real-world sequence of API methods and mix/ordering of reads and writes. This allowed us to create a viable set of load tests.
  3. Identify bottlenecks.  We used Apache JMeter to generate load and identify points where latency became an issue, correlating that against system loads to find out where we had reached saturation and needed to scale.
  4. Define a scaling unit.  In our case, we were using dedicated DB instances rather than database-as-a-service, so we decided to scale all three layers together. We identified how many API servers would saturate the DB layer, and how many gateways we would need to manage the traffic. We then defined a scaling unit: a collection of VMs that would be provisioned together. We might have scaled each layer independently had our API been architected differently, or if we were building from scratch on database-as-a-service.

    [Figure: Example collection for elastic scaling]

  5. Repeat.  The above let us scale from 1x to about 5x or 6x without any problem. However, when we hit 6x scaling we discovered a new bottleneck: the overhead of replicating commits across the database instances. We went back to the drawing board and redesigned the back end for eventual consistency so we could reduce database load.
  6. Automate everything.  We use Nagios and Puppet to monitor and respond to health changes. A new scaling unit is provisioned when we hit predefined performance thresholds (a minimal sketch of such a control loop follows this list).

    [Figure: Automation/orchestration workflow]

  7. Don’t forget to test scaling down.  If you set a threshold for removing capacity, it’s important to make sure that your workflow allows for a graceful shutdown and doesn’t impact calls that are in progress.
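To make steps 6 and 7 concrete, here is a minimal sketch of the threshold-driven control loop described above. The thresholds and the `monitor`/`provisioner` interfaces are illustrative stand-ins, not our actual Nagios/Puppet configuration:

```python
import time

# Illustrative thresholds; real values would come from the load tests in step 3.
SCALE_UP_TPS = 800      # transactions/sec at which a unit nears saturation
SCALE_DOWN_TPS = 200    # low-water mark for removing capacity
MIN_UNITS = 1

def control_loop(monitor, provisioner, interval_sec=60):
    """Poll a monitoring source and add or remove scaling units as needed."""
    while True:
        units = provisioner.unit_count()
        tps_per_unit = monitor.current_tps() / units
        if tps_per_unit > SCALE_UP_TPS:
            # Provision a full unit: gateway + API servers + DB replica together.
            provisioner.add_unit()
        elif tps_per_unit < SCALE_DOWN_TPS and units > MIN_UNITS:
            # Drain first so in-flight calls complete (step 7), then remove.
            provisioner.drain_unit()
            provisioner.remove_unit()
        time.sleep(interval_sec)
```

The point of the coarse-grained unit is that `add_unit` provisions the gateway, API server and DB VMs as one collection, exactly as defined in step 4.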

The above approach got us to 10x our initial capacity in a single data center. Because of some of our architecture decisions (coarse-grained scaling units and eventual consistency), we were then able to add global load balancing (GLB) and scale out to multiple data centers – first to another internal private cloud and then to a public cloud provider.

What's in a Composite API Platform?

Intel recently released what we call a composite API platform with our new API Manager product. What exactly do we mean by this?

A composite platform is a single platform for API management that handles both Public (sometimes called “Open”) APIs and Enterprise APIs. It’s composite because it combines the cost savings of “cloud”, through a multi-tenant SaaS partner portal, with the control of an on-premises gateway for traffic management. Like a composite material, the mingling of two or more constituents gives the final solution properties not found in either alone.

For a public or open API it’s important to have developers interact in a shared manner, generally through a public SaaS partner-management portal. A true multi-tenant SaaS offering gives the Enterprise cost advantages, as the partner-management piece is akin to running a website for potentially thousands of developers, and running a successful website means people, resources, archival and a higher cost of ownership.

Further, multi-tenant SaaS means developers may be using more than just your API, as they may also be finding other APIs advertised by other tenants that interest them. This is a good thing, as these are the caliber of developers you want. After all, experienced developers can bring more to the table – they may even come up with an awesome app that mixes your data with a partner’s in a new way.

As flashy as the cloud is, not all Enterprises can risk moving completely to a public cloud environment, especially for security and compliance reasons. The set of applications bound to the enterprise is sometimes called “gravity bound”, as these applications are part of an information system tied to core business processes or cannot be outsourced due to compliance, privacy or security issues.

How do these applications gain the benefits of the API economy? What if you want to build a mobile app or partner app that interacts with a mainframe or legacy system? How do you ensure compliance for API traffic that involves sensitive information? What about security?

For these types of large-scale environments, the Enterprise has good reasons to buy and own some of the components used to expose the API. Overall, the composite API platform really mixes the concepts of Public APIs and Enterprise APIs together.

All APIs are really Enterprise APIs; it’s the manner in which they are exposed, and their purpose, that labels them Public or “Enterprise”. In reality both support an Enterprise’s API strategy, and we might argue that the most successful enterprises will actually have both.

An Anecdote: Is the Web Clunky?

I was at dinner with a friend who was considering enrolling in a survey class on client-side web technologies. The course would cover things like JavaScript, Silverlight, HTML5, Adobe Flash and the like. As she was talking, I was playing with my new Samsung Galaxy Note 2, which, if you are not familiar, is somewhere between a traditional smartphone and a tablet. As a side note, that phone is pretty awesome in my book.

As she was talking about the course, I gave her the phone and told her to look up the definition of a word, any word, first using the web and then using an “app.” This is a simple task of course, but the experience of doing it on the web versus in an app has an extremely high ‘clunkiness’ factor to it.

For the web experience, you press “Internet”, go to Google, search for “dictionary”, find one or two of the top-ranked dictionary sites, wait for one to load, type in your word and find the answer amongst a panoply of side-rail ads. If you are a Google power user you can use the “definition” keyword, but not all users know about that.

For the app experience, you open the dictionary app, put in the word and get the answer. Simple, smooth and fast. This experience is fueled by APIs, specifically an API call from the native app to the Internet or other back-end system providing the answer.
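Under the hood, that app experience boils down to a single API call, something like the sketch below; the endpoint and response shape are hypothetical stand-ins for whatever dictionary API the app actually uses:

```python
import requests  # third-party HTTP client

# Hypothetical dictionary endpoint and response shape -- the real app's
# back-end API will differ.
resp = requests.get(
    "https://api.example-dictionary.com/v1/define",
    params={"word": "panoply"},
    timeout=5,
)
resp.raise_for_status()
print(resp.json()["definitions"][0])  # one call, one answer, no side-rail ads
```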

I told her, “Well, in that class you are considering, you are learning all of the technologies that enable the first experience.”

“Why would I do that?” She responded. “The first way seems so clunky.”

I considered responding with various arguments about how the web has fueled tremendous growth, and the value of open standards for mark-up and a common syntax for universal resources – but then I thought of the cash value of the task at hand: getting the definition of a word. While the web experience gets you the answer, the app experience gets the answer faster and with a better experience. When the task is well defined to a single purpose, the app shines.

The big question now is, will that native experience be restricted to devices or will it spread to Ultrabooks and desktops? How much of that native app experience will spill over to the larger computing devices?

Blake

Composite Distributed Applications and RESTful APIs

I was at Gartner Catalyst last week in San Diego for a luncheon keynote where I explored the concept of a composite distributed application. This is an idea that I have been chewing on for some time and is a direct result of how Enterprises are thinking about application architecture in light of “cloud” and “big data” as well as some of the trends we are seeing in our own customer base for Intel(R) Expressway Service Gateway.

First question: Where do Enterprise applications begin and end in 2012? Let’s state the obvious: the definition of an application as a monolithic piece of object code is ancient history. Let’s try the next definition, a standard n-tier, shared-nothing web application. This is certainly more timely, but I would also consider it dated.

If we add external cloud services, such as xPaaS (to use Gartner’s terminology), and disparate data warehouses or “big data” located in geographically dispersed data centers, this n-tier definition rooted in a single place doesn’t quite capture all of the application and may leave out important pieces. Key pieces of functionality may live “elsewhere”. This is where our standard enterprise application becomes distributed, with pieces in different physical locations, as well as composite, meaning it includes external xPaaS services such as storage, queuing, authentication or similar services.

So when we think about the larger boundaries of a composite distributed application, what are some salient properties? I came up with the following list for my talk:

Composite Distributed Application Properties

Hybridized – Includes new feature development as well as the integration of legacy code, which can be done by integrating legacy message or document formats and protocols. In other words, Enterprises don’t want to throw out existing functionality, even if it happens to be written in a different programming language.

Location Independent – Important pieces of logic, persistence and functionality may be split across 1-n clouds: a mix of standard data center deployment, private cloud and public cloud. The application is essentially living across different clouds. All clouds can win.

Knowledge Complete – As traditional enterprises emulate web companies with big data analytics and web intelligence, distributed applications must access the results of “Big Data” analytics, which are possibly owned by different factions in the Enterprise. The composite distributed application will need to aggregate results and make important predictions across these sources, as well as include any relevant data warehouse and JDBC sources.

Contextual – Produces just-in-time results based on client context, device and identity. For example, the application I/O model must meet the demands of mobile devices, such as REST APIs, as well as internal enterprise stakeholders.

Accessible & Performant – Produces data compatible with any client on any operating system, with minimal latency. Scales to hundreds of thousands of users where clients are a mix of smartphones, tablets, browsers, or devices.

Secure and Compliant – Meets compliance and security requirements for data in transit and data at rest, such as PCI, HIPAA and other requirements. This may involve a mix of traditional “coded-in” security, security at the message level (via a proxy), standard transport-level security, and data tokenization prior to analytics.

Common Service Layer

A common theme among current Intel service gateway customers is the creation of a common service layer that unifies existing back-end services. What happens is that services grow organically on different platforms and operating systems, written in different languages, but can be orchestrated under a common RESTful theme (for more background on REST fundamentals see DZone’s REST Reference Card paper). For instance, many of our customers have a mix of REST-style and SOAP web services and then use a gateway facade or layer to unify these. Unification, however, is only one of the requirements. The second requirement is external exposure to new clients and partners with appropriate performance, trust, threat and, increasingly, throttling/SLA features. Trending right now are OAuth and API key mechanisms, especially when the clients are expected to be mobile devices.
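As a rough illustration of such a facade, here is a minimal sketch: one REST entry point, an API-key check at the edge, and routing to organically grown back ends. The keys, routes and URLs are hypothetical; a real gateway expresses all of this as policy configuration rather than application code:

```python
import requests  # third-party HTTP client

API_KEYS = {"k-123": "partner-a"}          # issued keys -> client identity
ROUTES = {                                 # one RESTful front, many back ends
    "/v1/customers": "http://crm.internal/api/customers",
    "/v1/orders":    "http://erp.internal/orders",
}

def handle(api_key, path, params):
    """Authenticate at the edge, then route to the unified back-end service."""
    client = API_KEYS.get(api_key)
    if client is None:
        return 401, {"error": "invalid API key"}
    backend = ROUTES.get(path)
    if backend is None:
        return 404, {"error": "unknown resource"}
    resp = requests.get(backend, params=params, timeout=10)
    return resp.status_code, resp.json()
```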

How does this architecture grow into a composite distributed application? This is where location can play a role: as enterprises adopt more cloud PaaS services, their existing services will grow beyond what is found in the data-center, to what is found outside the data-center.

For example, one large service provider that we work with uses Intel Expressway Service Gateway to create a facade for 50+ RESTful services. In the future, as they adopt cloud, additional services may also be delivered from the cloud that fit under the facade, so the RESTful facade and services together may all properly be called “the application” – here the application is a mash-up of services split among clouds.

We call it “the application” because it comprises all three pieces: the gateway, the internal services and the cloud services. The next question is how to secure these API interactions and ensure this new breed of application meets performance and compliance requirements.

I think the answer is that you have to focus on the data itself as it is sent and received at each API hop. This means more emphasis on tokenization and encryption, as well as an understanding of the relevant authentication and authorization controls and how they apply depending on who needs to access the data. For “Big Data” this may mean pre-processing map/reduce input to provide tokenization or encryption prior to performing analytics, essentially ensuring compliance before processing.
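As one concrete (and simplified) illustration, the sketch below pre-processes records with deterministic, HMAC-based tokens before they enter an analytics job. Deterministic tokens preserve group-by and join semantics without exposing the raw values; the key handling and field names are assumptions for the example:

```python
import hmac
import hashlib

TOKEN_KEY = b"replace-with-a-managed-key"   # would come from a key manager
SENSITIVE_FIELDS = {"ssn", "card_number"}   # hypothetical schema

def tokenize_record(record: dict) -> dict:
    """Replace sensitive fields with deterministic tokens before analytics."""
    safe = dict(record)
    for field in SENSITIVE_FIELDS & record.keys():
        digest = hmac.new(TOKEN_KEY, str(record[field]).encode(),
                          hashlib.sha256).hexdigest()
        safe[field] = "tok_" + digest[:16]   # same input -> same token
    return safe

# Records are tokenized on the way into the map/reduce job, so compliance
# is ensured before processing, as described above.
records = [{"ssn": "078-05-1120", "amount": 12.50}]
clean = [tokenize_record(r) for r in records]
```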

-Blake

How to Harden Your APIs by Andy Thurai

The market for APIs has experienced explosive growth in recent years, yet one of the major issues providers still face is the protection and hardening of the APIs they expose to users. In particular, when you are exposing APIs from a cloud-based platform, this becomes very difficult to achieve given the various cloud provider constraints. To achieve it you need a solution that provides hardening capabilities out of the box, but still permits customization of granular settings to meet the needs of your solution. Intel has such a solution, and it has been well thought out. If this is something you desire, this article might help you foresee its many uses and its versatility.

Identify sensitive data and sensitivity of your API

The first step in protecting sensitive data is identifying it as such. This could be anything like personally identifiable information (PII), protected health information (PHI) or payment card (PCI) data. Perform a complete analysis of the data inbound to and outbound from your API, including all parameters, to figure this out.

Once identified, make sure only authorized people can access the data.

This will require solid identity, authentication and authorization systems to be in place; these can all be provided by the same system. Your API should be able to identify multiple types of identities. To achieve an effective identity strategy, your system will need to accept identities in the older formats, such as X.509, SAML and WS-Security, as well as the newer breed of OAuth, OpenID, etc. In addition, your identity systems must mediate the identities, acting as an identity broker, so they can securely and efficiently translate these credentials into a form your API can consume.

You will need identity-based governance policies in place, and these policies need to be enforced globally, not just locally. Effectively this means you need predictable results that are reproducible regardless of where you deploy your policies. Once the user is identified and authenticated, you can authorize the user based not only on that credential, but also on the location the invocation came from, the time of day, the day of the week, etc. Furthermore, for highly sensitive systems, the data or user can be classified as well: top-secret data can be accessed only by top-classified credentials, and so on. To build very effective policies and govern them at run time, you need to integrate with a mature policy decision engine. It can be either standards-based, such as XACML, or integrated with an existing legacy system provider.
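To make this concrete, here is a minimal sketch of an authorization check that weighs credential, data classification, call location and time of day together. The rules and values are purely illustrative; in practice this decision would be delegated to the policy decision engine mentioned above:

```python
from datetime import datetime

def authorize(user, resource, request_ip, now=None):
    """Combine identity, classification and call context into one decision."""
    now = now or datetime.now()
    # 1. The user must hold a clearance at or above the data classification.
    if user["clearance"] < resource["classification"]:
        return False
    # 2. Only calls from an approved network range (hypothetical check).
    if not request_ip.startswith("10."):
        return False
    # 3. Business-hours-only access for this resource class.
    if resource.get("business_hours_only") and not (8 <= now.hour < 18):
        return False
    return True

user = {"name": "alice", "clearance": 3}
resource = {"classification": 2, "business_hours_only": True}
print(authorize(user, resource, "10.1.2.3"))
```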

Protect Data

Protect your data as if your business depends on it, as it often does, or should. Make sure that sensitive data, whether in transit or at rest (storage), is never in its unprotected original format. While there are multiple ways data can be protected, the most common are encryption and tokenization. With encryption, the data is encrypted so that only authorized systems can decrypt it back to its original form. This allows the data to circulate encrypted and be decrypted as necessary along the way by secured steps. While this is a good solution for many companies, you need to be careful about the encryption standard you choose and about your key management and key rotation policies.

The other approach, tokenization, is based on the fact that you can’t steal what is not there. You can tokenize basically anything, from PCI to PII to PHI information. The original data is stored in a secure vault, and a token (a pointer representing the data) is sent downstream in its place. The advantage is that if any unauthorized party gets hold of the token, they wouldn’t know where to go to get the original data, let alone have access to it; even if they do know where the token vault is located, they are not whitelisted, so the original data is not available to them. The greatest advantage of tokenization systems is that they reduce the exposure scope throughout your enterprise: by eliminating the sensitive and critical data from the stream, you centralize your focus and security upon the stationary token vault rather than active, dynamic and pliable data streams.

While you’re at it, you might want to consider a mechanism, such as DLP, which is highly effective in monitoring for sensitive data leakage. This process can automatically tokenize or encrypt sensitive data that is going out. You might also want to consider policy-based information traffic control: while certain groups of people may be allowed to view certain information (such as company financials, by an auditor), those groups may not be allowed to send that information. You can also enforce this based on the invocation’s location (i.e., intranet users vs. mobile users who are allowed to get certain information).
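Here is a minimal sketch of the vault-and-whitelist pattern just described. The in-memory store and the whitelist set are stand-ins for a hardened token vault and its access controls:

```python
import secrets

class TokenVault:
    def __init__(self, whitelist):
        self._store = {}                  # token -> original value
        self._whitelist = set(whitelist)  # systems allowed to detokenize

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(16)  # random; no relation to value
        self._store[token] = value
        return token

    def detokenize(self, token: str, caller: str) -> str:
        if caller not in self._whitelist:
            raise PermissionError("caller not whitelisted for detokenization")
        return self._store[token]

vault = TokenVault(whitelist={"settlement-system"})
token = vault.tokenize("4111 1111 1111 1111")   # PAN never leaves the vault
# Downstream services pass `token` around; only the whitelisted settlement
# system can exchange it for the original value.
print(vault.detokenize(token, "settlement-system"))
```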

QoS

While APIs exposed in the cloud can let you get away with scaling by expansion or bursting during peak hours, it is still a good architectural design principle to limit or rate-control access to your API. This is especially valuable if you are offering an open API, exposed to anyone. There are two sides to this: a business side and a technical side. The technical side allows your APIs to be consumed in a controlled way, and the business side lets you negotiate better SLA contracts based on the usage model you have at hand. You also need a flexible throttling mechanism that can help you implement this efficiently: just notify, throttle the excess traffic, shape the traffic by holding messages until the next sampling period starts, etc. In addition, there should be a mechanism to monitor and manage traffic over both the long term and the short term, which can be based on two different policies.
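Below is a minimal sketch of such a throttle: one sampling window per API key, with the three actions just mentioned (notify only, reject the excess, or shape traffic by holding it until the next sampling period) as configurable modes. The limits are illustrative:

```python
import time

LIMIT_PER_WINDOW = 100   # illustrative limit
WINDOW_SEC = 60          # sampling period

class Throttle:
    def __init__(self, mode="reject"):   # "notify" | "reject" | "shape"
        self.mode = mode
        self.counts = {}                 # api_key -> (window_start, count)

    def admit(self, api_key):
        now = time.time()
        start, count = self.counts.get(api_key, (now, 0))
        if now - start >= WINDOW_SEC:
            start, count = now, 0        # new sampling period
        count += 1
        self.counts[api_key] = (start, count)
        if count <= LIMIT_PER_WINDOW:
            return True
        if self.mode == "notify":
            print(f"over limit: {api_key}")       # alert, but let it through
            return True
        if self.mode == "shape":
            time.sleep(start + WINDOW_SEC - now)  # hold until next period
            self.counts[api_key] = (time.time(), 1)
            return True
        return False                     # "reject": throttle the excess
```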

Protect your API

Attacks on or misuse of your publicly exposed API can be intentional or accidental. Either way, you can’t afford for anyone to bring your API down. You need application-aware firewalls that can look into application-level messages and prevent attacks. Generally, application attacks tend to fall under injection attacks (SQL injection, XPath injection, etc.), script attacks, or attacks on the infrastructure itself.
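As a deliberately naive illustration of application-level message inspection, the sketch below screens requests against a few signatures from the attack classes named above. A real application-aware firewall uses full message parsers and maintained threat signatures, not a handful of regexes:

```python
import re

SUSPICIOUS = [
    re.compile(r"('|%27)\s*(or|and)\s+\d+=\d+", re.I),  # classic SQL injection
    re.compile(r"<\s*script", re.I),                    # script injection
    re.compile(r"\bcount\s*\(\s*//", re.I),             # crude XPath probe
]

def screen(message: str) -> bool:
    """Return True if the message looks safe enough to forward."""
    return not any(p.search(message) for p in SUSPICIOUS)

print(screen("user=alice"))        # True
print(screen("id=1' OR 1=1"))      # False: blocked at the gateway
```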

Message Security

You also need to provide both transport-level and message-level security features. While transport security such as SSL/TLS provides some data privacy, you need the option to encrypt and sign the message traffic itself, so that it reaches the end systems safely and securely and so the end system can authenticate the user who sent the message.
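Here is a minimal sketch of message-level protection layered on top of transport security: a shared-key signature over the message body that survives TLS termination at intermediaries. In practice this role is played by standards such as WS-Security/XML-DSig or JOSE, and the key here is assumed to be distributed out of band:

```python
import hmac
import hashlib

SHARED_KEY = b"distributed-out-of-band"  # assumption: pre-shared with the end system

def sign(body: bytes) -> str:
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(body), signature)

body = b'{"transfer": 100}'
sig = sign(body)             # travels with the message, e.g. in a header
print(verify(body, sig))     # end system checks before trusting the body
```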

Imagine if you could provide all of the above in one package: just take it out of the packaging, power it up, and with a few configuration steps provide most of what we have discussed above. More importantly, in a matter of hours you’ve hardened your API to your enterprise level (or in some cases better than that). Intel has such a solution to offer.

Check out our Intel API gateway solution, which offers all of those hardening features in one package, and a whole lot more. Feel free to reach out to me if you have any questions or need additional information.

http://cloudsecurity.intel.com/solutions/cloud-service-brokerage-api-resource-center


Andy Thurai — Chief Architect & CTO, Application Security and Identity Products, Intel

Andy Thurai is Chief Architect and CTO of Application Security and Identity Products with Intel, where he is responsible for architecting SOA, Cloud, Governance, Security, and Identity solutions for their major corporate customers. In this role, he is responsible for helping Intel/McAfee field sales, technical teams and customer executives. Prior to this role, he held technology architecture leadership and executive positions with L-1 Identity Solutions, IBM (Datapower), BMC, CSC, and Nortel. His interests and expertise include Cloud, SOA, identity management, security, governance, and SaaS. He holds a degree in Electrical and Electronics Engineering and has over 20 years of IT experience.

He blogs regularly at www.thurai.net/securityblog on security, SOA, identity, governance and cloud topics. You can find him on LinkedIn.

Next Gen Enterprise API Architecture for Mobile

The Enterprise software industry has grown up around the standard three-tier architecture for web applications, pioneered circa 1995. This architecture is ideal for web browsers, which have become the universal client of the Enterprise.

With the introduction of Enterprise mobile applications, we are seeing new avenues for innovation, new user experiences and increased convenience. In some ways, however, we are rolling back the clock.

Allow me to clarify: if we accept the premise that native mobile applications deliver the best functionality on disparate mobile platforms, we are at the cusp of re-introducing “thick client” applications back into the enterprise. Native mobile applications are rich in their design and functionality but behave like monolithic applications: they provide their own persistence tier, slick user interfaces and natively compiled code; they require upgrades and updates on the client device; and they utilize a mix of synchronous and asynchronous communication. Sure, they use REST for communication, but is this due to historical accident?

Other than the physical platform itself (a smartphone or tablet), native mobile applications may have more in common with the old “Win32 client/server apps” that existed before the browser revolution. Are we moving forwards or backwards?

Further, what about web mobile applications that run in the browser on the mobile device? How do they factor in? How do new technologies like HTML5 affect these types of applications? How do REST APIs affect the mobile architecture?

Is the Enterprise ready for mobile? How does the standard three tier architecture fare in light of these trends?

I try to get a handle on these issues in our new whitepaper, A Unified Mobile Architecture for the Modern Data Center.

Happy Reading,

Blake

Andy Thurai on “The API – You Can’t Live Without It”

The unprecedented explosion of modern technologies, combined with a burgeoning mobile space, has forced enterprises to rethink previously held beliefs about the static enterprise perimeter. Remember the olden days, when your enterprise was completely self-contained in one data center, with your apps inside the firewall and everyone nearly as confident in it as in Ft. Knox? With the explosion in mobile computing, the demand for cheap or “free” usage of resources, and the sharp reduction in cost offered by the cloud delivery model, it is expected (or rather demanded) that every enterprise expose its APIs not only from the enterprise but from a cloud-based model. (Note: “cloud” here refers to a loosely defined delivery model, be it of the public, private, community or hybrid variety.)

Couple this inexorable progression toward a cloud-based model with the need for mobile enablement and Web 2.0 technologies, and you are forced to expose not only your SOAP APIs, but also JSON, REST and other fast, quick time-to-market (TTM) APIs that can be easily manipulated and consumed.

This brings an interesting issue to the forefront: you are forced to rethink your corporate security strategy. Many organizations (and the C-levels I speak with on a regular basis) are scared to move their sensitive applications (and processes and data) to the cloud, mainly because of security. But that doesn’t stop them from exploring, moving some non-sensitive applications to the cloud and “testing the waters”, so to speak. Once they see how easy and cheap it can be, they begin losing sleep thinking about all the money they could save by moving everything to the “cloud”, given the constant pressure to plan and come in under budget.

It’s no wonder that API traffic has exploded over the past few years. According to a recent survey, about 60% of enterprise traffic is API-based. According to ProgrammableWeb, 75% of Twitter traffic is API-based, and it lists at least 5,000 APIs (http://blog.programmableweb.com/2012/02/06/5000-apis-facebook-google-and-twitter-are-changing-the-web/), with the pace growing. ProgrammableWeb also has a neat tool where you can search all the publicly available APIs (http://www.programmableweb.com/apis/directory). If you check this out, you will immediately notice that most of the social APIs are REST/JSON based. There is obviously a good reason for that.

When it comes to APIs there are two distinct, broad categories: Social APIs and Enterprise APIs. The Social APIs are created by, and for, a society hungry for instant data updates (remember the AT&T 4G commercial, “so 42 seconds ago”: http://www.youtube.com/watch?feature=player_embedded&v=bvVVQGgbKk0). I miss the good old days when we found out what happened in the world by checking the CNN website once an hour or so.

In general, the social APIs tend to be fast, easy to implement and REST-only, without any enterprise-class security, not monetized, and focused on publishing content.

You can’t afford to have enterprise APIs published and consumed the same way. Your enterprise-class security needs to move with your application’s API wherever it goes and however it is accessed. And it is not a question of if; it is a question of when. The success of companies with the API at the core of their business models has transformed the industry: look at Google, Twitter, Facebook and other smaller players. According to ProgrammableWeb, “The most popular API category from the last 1,000 APIs is government. In total, we list 231 government APIs and nearly half of them have been added in the last four months.” When the government adopts a technology standard, you know there is no going back; it is here to stay.

As applications migrate out of your own “Ft. Knox”, the issue will become more pronounced. You’ll still need the same quality of security, management, SLAs and centralization of usage-based information, predicated on policy and identity information.

Most cloud providers just give you the base platform and leave most of this to you. However, your enterprise-class APIs need enterprise-class security, governance, lifecycle management, API key and credential management, throttling and quota management, protocol translation and versioning, API performance optimization and discovery. The need to expose your APIs in multiple formats (as discussed above: REST, JSON, SOAP, etc.) can multiply the complexity of an implementation exponentially.

Having set the stage (without wanting to scare you about the inherent risks of exposing your APIs to the cloud), let’s talk about how Intel can help you achieve all of these things effortlessly, regardless of your usage model, and without the need to be concerned about whether your APIs are REST-based, full SOAP APIs or even JSON-based mobile APIs.

Intel has been in the web services, XML and SOAP security space since the acquisition of Sarvega (circa 2005), so our expansion into the API security space has been a natural progression. We brought out an API security gateway last year which caught the attention of many of our customers. The fact that it can help enterprises move enterprise-grade security policies to the cloud, and subsequently enforce them there, without having to rewrite the policies makes it even more interesting.

With the addition of OAuth 2.0 to the API gateway in our latest release, it seems like a timely opportunity to talk about its capabilities. When you move your enterprise applications to the cloud and expose APIs from there, you can either retool your application to fit that platform and delivery model, or, as a second option, use our API gateway as API middleware, which can help you solve many of those issues. APIs have become strategic control points for the cloud.

So essentially you want to abstract the following functionality to API middleware:

  1. Keep your implementation technology-agnostic. Provide a mechanism to support REST, JSON, SOAP, etc. and mediate to the back-end-supported format in a non-intrusive manner (a minimal mediation sketch follows this list). Most of the time this can be achieved by configuring the API gateway solution to act as a facade to the existing application. This is really important in the ever-changing API world: JSON and REST APIs have evolved in the past few years, and by being agnostic you’ll be prepared for the next “flavor” in whatever way it instantiates itself.
  2. Keep your security and API management close to your APIs, and be transparent about it with your customers.
  3. Move security, scalability, management and audit functionality, and their issues, away from the actual API implementation.
  4. Ensure that you have strong API monitoring, metering, logging, auditing and versioning features.
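As a rough sketch of the mediation described in point 1, the following accepts JSON from clients and wraps it into a SOAP request for a legacy back end. The service URL, envelope layout and operation name are hypothetical, not a real gateway policy:

```python
import json
import requests  # third-party HTTP client

LEGACY_SOAP_URL = "http://legacy.internal/AccountService"  # hypothetical

def mediate(json_body: bytes) -> str:
    """Translate a JSON request into a SOAP call, non-intrusively."""
    doc = json.loads(json_body)
    body_xml = "".join(f"<{k}>{v}</{k}>" for k, v in doc.items())
    envelope = (
        '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soapenv:Body><GetAccount>{body_xml}</GetAccount></soapenv:Body>"
        "</soapenv:Envelope>"
    )
    resp = requests.post(LEGACY_SOAP_URL, data=envelope,
                         headers={"Content-Type": "text/xml"}, timeout=10)
    return resp.text

# mediate(b'{"accountId": "42"}') would call the SOAP service transparently,
# so the client never needs to know the back-end format.
```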

Check out our API Gateway details to see how we can help you make this migration easy and painless.

http://software.intel.com/en-us/articles/Cloud-Service-Brokerage-API-Resource-Center/

For more information about Intel Expressway Service Gateway, case studies, testimonials and tech tutorials, please visit www.intel.com/go/identity

Andy Thurai — Chief Architect & CTO, Application Security and Identity Products, Intel.

Andy Thurai on “Social SOA with API Gateway”

In a recent conversation with a large customer of ours, some interesting facts came to light. This blog is a recapitulation of the insights I took from that discussion. I’ll not only tell you how this customer is using our solution, but also how it is helping them take their online presence to the proverbial next level.

Our customer, an online university, is using our solution as middleware, providing both security and data-mediation functions, to push SOAP and REST API transactions through to the back end. They are processing about 18 million messages per day. Now think about that for a second; the number in itself is staggering. While most educational institutions use free middleware solutions, being part of an ultra cost-conscious milieu, this university decided to use our solution to bring its presence to a whole new level, while still doing so in a completely cost-effective fashion.

We also helped the university integrate fairly easily with a home-grown single sign-on solution, so they would not be forced to “rip and replace” all of their technology, unlike some of the implementation plans that would be thrust upon them by some of our competitors. We integrate with identity management systems, as well as solutions that address governance, various registries, and an array of monitoring solutions. For us, it’s never about pushing an entire stack to a customer. Instead we feel customers should have the latitude to choose a technology from a range of available options, consistent with a “best of breed” approach.

Though it initially started off as more of an academic security experiment for the university, our solution has been embraced much more widely and has grown into a deployment that encompasses SSL offloading, XSLT transformation, service aggregation and service mediation. In addition, our solution is being used to abstract the authentication layer and communicate with a custom authentication service. We provide the backbone of their social SOA.

The initial services were mostly SOAP-based; however, when the REST services were ready, we were ready too, with a product that could similarly address all of the same relevant security concerns.

The true reason everyone is excited, though, is that the university is looking to move their service offerings to the cloud. At first glance, moving all your services (or even just a service abstraction layer) to the cloud and exposing them 24×7 to hackers can be quite a daunting prospect! Another concern revolves around customers’ resource utilization: especially when you are offering your services for free (at least most of the time), exposing those services without throttling them can be asking for a lot of trouble. Rest assured, Intel has features built into our solution set that will help them with both their security concerns and their ability to implement throttling.

Our Quality of Service (QoS) functionality allows service providers to limit the usage of services, a classic need in a cloud delivery model that is often overlooked because of perceptions about the elasticity of the cloud. In my mind, the idea that you can throw resources at a problem without limit, ignoring fundamental architecture design principles such as TOGAF, DoDAF and Zachman, should be a huge concern, and “top of mind” for everyone. While you can implement some of these functions at the application/services level, a lot of overhead will be added to the application itself. Moreover, there will be no uniformity across applications in how the feature is implemented.

If, on the other hand, you were to use our QoS functionality, you could monitor API usage and meter that usage based on the identity of the user. Technically, you can go even finer-grained than identity; think more along the lines of something location + identity + invocation based.

You can not only limit service usage based on predefined policies, but also enforce those policies globally. Our solution provides the ability for a back-end application to recover in case of overload; this built-in “self-healing” feature should allow many services to recover without a need to bounce/reboot often. And the built-in auditing, reporting and logging tool keeps extensive details, so it can be used not only for forensic analysis, should the need arise, but also when implementing a chargeback system if so desired.
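As an illustration of what identity-plus-location metering with an audit trail might look like, here is a minimal sketch; the quota, field names and keying are assumptions for the example, not the gateway’s actual schema:

```python
import time
from collections import defaultdict

DAILY_QUOTA = 10_000                 # illustrative per-caller limit
usage = defaultdict(int)             # (identity, location) -> call count
audit_log = []                       # extensive details per invocation

def record_call(identity, location, operation):
    """Meter one invocation and keep a record for forensics/chargeback."""
    key = (identity, location)
    usage[key] += 1
    audit_log.append({
        "ts": time.time(), "identity": identity,
        "location": location, "operation": operation,
        "count_today": usage[key],
    })
    return usage[key] <= DAILY_QUOTA  # False -> throttle this caller

# Example: an intranet user and a mobile user are metered separately,
# so policy can treat the two invocation contexts differently.
record_call("alice", "intranet", "GET /grades")
record_call("alice", "mobile", "GET /grades")
```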

For more information about Intel Expressway Service Gateway, case studies, testimonials and tech tutorials, please visit www.intel.com/go/identity

Andy Thurai — Chief Architect & CTO, Application Security and Identity Products, Intel.

Federal CIO VanRoekel details his ‘first’ priorities

After nearly three months on the job, federal chief information officer Steven VanRoekel is revisiting some long-standing technology priorities.

VanRoekel recently gave his first major policy speech since taking over for Vivek Kundra in August, signaling how he plans to move the administration’s IT reform ball forward.

In this Federalnewsradio.com post, read about how:

  • OMB will promote a “share first” policy – The Office of Management and Budget will begin promoting a “share first” policy. VanRoekel said the idea is to have agencies look to others when buying technology or upgrading systems before going off on their own.
  • “I envision a set of principles like XML First, Web Services First, Virtualize First and other firsts that will inform how we develop our Government’s systems.”
  • “All of these elements are really grounded in the foundation that is cybersecurity.”


Toward these goals, you can deploy Intel Expressway Service Gateway, a purpose-built cross-domain service gateway that enables secure collaboration among agencies.

You can address perimeter defense with wire-speed XML threat protection, complex security policy enforcement and ready multi-factor integration with your identity infrastructure.

And you get the Intel advantage, since Intel Expressway Service Gateway has been engineered to take advantage of Intel hardware optimizations to deliver best-in-class performance and hardened, high-assurance security.

Please reach out to us at intelsoainfo@intel.com or call 978-948-2585 if you need assistance.