Are you PCI DSS compliant yet? What is stopping you?

The PCI tokenization solution showcase at NRF was a grand success. I never would have believed the traffic through our booth and the level of interest. First of all, the show was huge! I am not kidding. Last year the attendance was 25,500 (http://www.nrf.com/modules.php?name=News&op=viewlive&sp_id=1302) and I am pretty sure this year they surpassed that (the last count puts it at 27,600).

Intel had a big booth there, and prominently displayed was our PCI tokenization solution. The reason our solution gained so much visibility is, as one customer put it, that we provide compliance and risk mitigation in one place.

The most effective PCI tokenization solution must:

  1. Create a security story, NOT just a compliance story (I will blog about this later). In other words, it should not only reduce PCI scope but also help you protect cardholder data
  2. Deliver high-speed, high-performance tokenization capable of producing tens of thousands of tokens per second, if needed
  3. Use a hardware-based true random token generator
  4. Scale up to 2 billion tokens or more
  5. Offer a proxy tokenization method that does not require touching any of your existing systems
  6. Not only “automagically” detect PAN numbers but also let you preserve certain digits for routing or identification purposes on an as-needed basis (see the sketch after this list)
  7. Allow you to use tokens as a surrogate for the original credit cards every time – “multi-use” tokens
  8. Allow you to either BYOD (Bring Your Own Database) or use an extra-hardened, highly secure database provided for you
  9. Handle data in any format and on any incoming channel
  10. Be secure enough to perform tokenization in the DMZ, if needed
  11. Work anywhere within the enterprise or extended enterprise, including partner locations and virtual environments such as the cloud
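
To make items 3, 6 and 7 concrete, here is a minimal sketch of what a format-preserving, multi-use token generator might look like. The function and variable names are illustrative only, Python's `secrets` module stands in for a hardware random source, and a plain dictionary stands in for a hardened token vault; this is not Intel's implementation.

```python
import secrets  # CSPRNG stand-in; a hardware appliance would draw from a true random source (item 3)

def tokenize_pan(pan: str, vault: dict) -> str:
    """Return a multi-use surrogate for `pan`, preserving the first 6 and last 4 digits (items 6 and 7)."""
    if pan in vault:                      # "multi-use": the same PAN always maps to the same token
        return vault[pan]
    head, tail = pan[:6], pan[-4:]
    taken = set(vault.values())
    while True:
        middle = "".join(str(secrets.randbelow(10)) for _ in range(len(pan) - 10))
        token = head + middle + tail
        if token != pan and token not in taken:   # avoid colliding with the real PAN or another token
            vault[pan] = token            # the vault (not hardened in this sketch) holds the only mapping back
            return token

vault = {}
t1 = tokenize_pan("4111111111111111", vault)
t2 = tokenize_pan("4111111111111111", vault)
assert t1 == t2 and t1.startswith("411111") and t1.endswith("1111")
```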

Check out Intel’s Tokenization Buyers’ Guide on how to do this effectively.

You are Gazetted…

Recently the government of Singapore passed a bill (or “gazetted” it, as they call it, which sounds a lot fancier) to protect the personal data of consumers:

http://www.mica.gov.sg/DPbillconsultation/Annex%20D_Draft%20PDP%20Bill%20for%20Consultation.pdf

“Protection of personal data

26. An organisation shall protect personal data in its custody or under its control by making reasonable security arrangements to prevent unauthorised access, collection, use, disclosure, copying, modification or disposal or similar risks.

Cross-border Transfers

The PDPA also permits an organisation to transfer personal data outside Singapore provided that it ensures a comparable standard of protection for the personal data as provided under the PDPA (Section 26(1)). This can be achieved through contractual arrangements.”

What they are saying is that gone are the days when a business that loses its customers’ data can simply tell consumers, “Oops, sorry, we lost your data…” and leave it at that. Governments are now taking initiatives to hold companies responsible for being careless with consumer data and for failing to protect it, with real consequences.

http://europa.eu/rapid/press-release_IP-12-46_en.htm?locale=en

This means that, as a corporation, you need to protect not only data in storage and in transit; given the cross-border restrictions (these are especially strictly enforced in Europe; read about them at the URL above), you also need to figure out a way to keep the data, and the risk, to yourself instead of passing them on to third parties. The easiest way to achieve that is to tokenize the sensitive data, keep it in your secure vault, and send only the tokens to the other end. Even if the other end is compromised, your sensitive data and your integrity remain intact, and in an audit it will be easy to prove that you went above and beyond, not only to comply with laws such as this, but also because you genuinely care about your customers’ sensitive personal data. Brand reputation is a lot more important than you think.

Check out some of my older blogs on this topic:

Who is more sensitive – you or your data?

Content/ Context / Device aware Cloud Data Protection

Part 2: Context aware Data Privacy

Also, keep in mind Intel Token Broker and Cloud Security Gateway solutions can help you solve this fairly easily without messing with your existing systems too much.

Check out more details on Intel cloud data privacy solutions.

Effective PCI Tokenization Methods

Recently a colleague and friend of mine wrote a great article about different ways to become PCI DSS 2.0 compliant by tokenizing PAN data. In case you missed it, I want to draw your attention to it.

Essentially, if you are looking to become PCI DSS 2.0 compliant, there are a few ways you can achieve that. The most painful is obviously a rip-and-replace strategy; the easiest is an incremental, less intrusive approach.

The first approach, the monolithic “big bang” approach, is the legacy way of doing things. Once you figure out the areas of your system that are non-compliant (that is, either storing PAN data, encrypted or not, or processing PAN in the clear), you decide whether you need each component to be PCI compliant. The PCI audit is extensive, time consuming and methodical, with every process, application, storage system, database, and system examined, and it therefore becomes very expensive. Once you figure out which components need to be PCI compliant, you can take the rip-and-replace approach, in which you touch every system component that needs to be modified and rewrite the system to become compliant. This might involve touching every component and changing your entire architecture. This is essentially the most expensive, most painful, and slowest route to compliance. While it can be the most effective for spot solutions, it becomes an issue if you have to repeat it every time the PCI DSS requirements change (which seems to be every year).

The second approach, API/SDK-based tokenization, is much more effective. In this case, you retrofit applications, processes, systems, databases, etc. by making those components call an API (or SDK) which converts the PAN data and returns a token that replaces the original PAN data. This is a minimally invasive procedure. While it doesn’t require you to change your entire architecture or system, it still requires you to touch every component that needs to be compliant. Effectively, this method is much faster to market, while also giving you an opportunity to adapt quickly when the PCI DSS requirements change.
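
As a rough illustration of the API approach, the snippet below shows what the retrofitted call might look like from an application's point of view. The endpoint URL, payload shape and field names are hypothetical, not a real vendor API; it only shows the pattern of handing a PAN to a tokenization service and storing the returned token instead.

```python
import json
import urllib.request

TOKEN_SERVICE = "https://tokenizer.internal.example.com/v1/tokenize"  # hypothetical endpoint

def tokenize(pan: str) -> str:
    """Ask the tokenization service for a surrogate before the PAN is stored or forwarded."""
    body = json.dumps({"value": pan}).encode("utf-8")
    req = urllib.request.Request(
        TOKEN_SERVICE,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:      # in practice this call is mutually authenticated over TLS
        return json.loads(resp.read())["token"]

# Each component that touches PAN data is retrofitted with a call like:
#   order["card_number"] = tokenize(order["card_number"])
```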

The third approach is the gateway approach. Here, you essentially monitor the traffic between components and tokenize/detokenize the data in transit. This is also known as in-line tokenization. This method is effectively the cheapest and the quickest to market, but the biggest advantage is that changes to your existing systems are minimal to nil. Essentially, you make the PAN data flow through the gateway, which takes care of converting the PAN data to tokens before it hits your systems. Imagine the painful exercise of making your mainframe and legacy systems compliant by re-coding them to deal with tokenized data; this method essentially eliminates that.
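
Here is a toy illustration of the in-line idea, assuming a simplified PAN pattern and an in-memory vault: a message passing through the gateway has its PANs swapped for tokens before any backend component sees it. A real gateway does this as a hardened network proxy across many protocols and formats; this sketch only conveys the concept.

```python
import re
import secrets

PAN_PATTERN = re.compile(r"\b\d{13,19}\b")   # simplified; a real gateway would also Luhn-check matches

def issue_token(pan: str, vault: dict) -> str:
    """Mint (or reuse) a random surrogate that keeps the first 6 and last 4 digits."""
    if pan not in vault:
        middle = "".join(str(secrets.randbelow(10)) for _ in range(len(pan) - 10))
        vault[pan] = pan[:6] + middle + pan[-4:]
    return vault[pan]

def tokenize_in_flight(message: str, vault: dict) -> str:
    """Rewrite a message passing through the gateway so the backend only ever sees tokens."""
    return PAN_PATTERN.sub(lambda m: issue_token(m.group(0), vault), message)

vault = {}
incoming = '{"customer": "A-102", "card": "4111111111111111", "amount": "42.00"}'
print(tokenize_in_flight(incoming, vault))   # the backend receives a token in place of the PAN
```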

You can read his entire article here.
Cost Effective PCI DSS Tokenization for Retail (Part I)
Cost Effective PCI DSS Tokenization for Retail (Part II)

Also, don’t forget to check out our tokenization buyer’s guide here.

 

Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel

Andy Thurai is Chief Architect and Group CTO of Application Security and Identity Products with Intel, where he is responsible for architecting SOA, Cloud, Mobile, Big Data, Governance, Security, and Identity solutions for their major corporate customers. In his role, he is responsible for helping Intel/McAfee field sales, technical teams and customer executives. Prior to this role, he has held technology architecture leadership and executive positions with L-1 Identity Solutions, IBM (Datapower), BMC, CSC, and Nortel. His interests and expertise include Cloud, SOA, identity management, security, governance, and SaaS. He holds a degree in Electrical and Electronics engineering and has over 25+ years of IT experience.

He blogs regularly at www.thurai.net/securityblog on Security, SOA, Identity, Governance and Cloud topics. You can also find him on LinkedIn at http://www.linkedin.com/in/andythurai

 

Why are APIs so popular?

Kin Lane recently wrote a couple of blogs about why copyrighting an API is not common. I couldn’t agree more that copyrighting APIs is uncommon. First of all, the API definition is just an interface (it is the implementation detail that is important and needs to be guarded), so it doesn’t make any sense to copyright an interface. (It is almost like copyrighting a pretty face :) ). Secondly, the whole idea of exposing an API is that you are looking for others to finish the work you started; you are just providing the plumbing. Why would anyone want to get involved with a copyrighted API and finish your work for you?

Kin Lane says, “API copyright would prevent the reuse and remix of common or successful API patterns within a space. We are at a point where aggregating common, popular APIs into single, standardized interfaces is emerging as the next evolution in web and mobile app development.”

http://apivoice.com/2012/12/08/api-copyright-would-restrict-api-aggregation/index.php (to read his complete blog).

We have gone from the services aggregation concepts to mashups, and now I am seeing the newer trend of API aggregation.

Keep in mind that APIs are generally offered by vendors who want to expose a specific functionality or platform. If you need cross-platform, cross-provider, cross-functionality options, you need API aggregation. Remember, during the services days, how hard a time we used to have integrating and aggregating services from different vendors? I know some companies are making a good living just by building aggregated APIs. :)

One of the common usage patterns I see time and again is customers using the strongest offerings from the vendors of their choice. This was not possible when you were building services: you ended up buying one vendor’s stack, and you were limited to what they offered unless you custom-built the weak parts yourself.

Now imagine the power of what you are getting. You are cherry-picking best-of-breed platforms and the best possible functionality from multiple vendors of your choice and liking.

I highly encourage you to check out the following solution brief, which describes a composite API platform architecture in which Intel® has packaged the market-leading Mashery API sharing portal with Intel’s gateway security and integration technology to deliver Intel® Expressway API Manager.

 

 

Please contact me if you need more information. I’m more than happy to send you any additional information that you may need.

Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel


 

 

Context Aware Data Privacy (part II)

If you missed Part 1 of this article, you can read it here when you get a chance (link).

As a continuation of Part 1, where I discussed the issues with data protection, in this article we will explore how to solve some of those issues.

People tend to forget that hackers are attacking your systems for one reason only – DATA. You can spin that any way you want, but at the end of the day, they are not attacking your systems to see how you configured your workflow or how efficiently you processed your orders. They couldn’t care less. They are looking for the golden nuggets of information that they can either resell or use to gain some other kind of monetary advantage. Your files, databases, data in transit, storage data, archived data, etc. are all vulnerable and all of value to a hacker.

Gone are the old days when someone sitting in mom’s basement hacked into US military systems to boast about their ability to a small group of friends. Remember the movie WarGames? Modern-day hackers are very sophisticated, well funded, often part of for-profit organizations, and backed either by big organized cyber gangs or by other entities within their respective organizations.

So you need to protect your data at rest (regardless of how old the data is – as a matter of fact, the older the data, the less protected it tends to be), data in motion (moving from one place to another, whether between processes, services, or enterprises, or into/from the cloud or to storage), and data in process/use. You need to protect your data with your life.

Let us closely examine the things I said in my last blog (Part 1 of this blog), the things that are a must for a cloud data privacy solution.

More importantly, let us examine the elegance of our data privacy gateway (code named Intel ETB – Expressway Tokenization Broker), which can make this costly, scary, mind-numbing experience go easily and smoothly. The following elements embedded in our solution are going to make your problem go away sooner.

1.       Security of your sensitive message processing device

As they say, Caesar’s wife must be above suspicion (did you know Caesar divorced his wife in 62 BC?). What is the point of having a security device that inspects your crucial traffic if it can’t be trusted? You need to put in a solution/device whose vendor can make assertions regarding security and has the necessary certifications to back up those claims. This means that a third-party validation agency should have tested the solution and certified it to be ‘kosher enough’ for an enterprise, data center or cloud location. The certifications should include FIPS 140-2 Level 3, CC EAL 4+, DoD PKI, STIG vulnerability testing, NIST SP 800-21, support for HSMs, etc. The validation must come from recognized authorities, not just from the vendor.

2.       Support for multiple protocols

When you are looking to protect your data, it is imperative that you choose a solution that can handle not only HTTP/HTTPS, SOAP, JSON, AJAX and REST. You also need to consider whether the solution supports all standard protocols known to the enterprise/cloud, including “legacy” protocols such as JMS, MQ, EMS, FTP, TCP/IP (and secure versions of all of the above) and JDBC. More importantly, you also need to determine whether the solution can speak industry-standard protocols natively, such as SWIFT, ACORD, FIX, HL7, MLLP, etc. You also need to look at whether the solution can support other custom protocols that you might have. The solution you are looking at should give you the flexibility to inspect your ingress and egress traffic regardless of how your traffic flows.

3.       Able to read into anything

This is an interesting concept. I was listening to one of our competitor’s webcasts, and there was complete silence when what appeared to be a dreaded question was asked of the person speaking on behalf of that company: “How do you help me protect a specific format of data that I use in transactions with a partner?” The presenter answered by admitting that their solution lacked support for it. While I’m not trying to be unnecessarily abrasive, the point is that you should have the capability to look into any format of data that is flowing into, or out of, your system when the necessity arises. This means that you should be able to inspect not only XML, SOAP, JSON, and other modern message formats; a solution should also be able to retrofit your existing legacy systems and provide the same level of support for formats such as COBOL (oh yes, we will be doing a Y10K on this all right), ASCII, binary, EBCDIC, and other unstructured data streams, which are of equal importance. Sprinkle in industry message formats such as SWIFT, NACHA, HIPAA, HL7, EDI, ACORD, EDIFACT, FIX, and FpML to make the scenario interesting. And don’t forget good old messages that can be sent in conventional formats such as MS Word, MS Excel, PDF, PostScript and plain HTML. You need a solution that can look into any of these data types and help you protect the data in those messages seamlessly.
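
As a hedged sketch of what “automagically” detecting PAN data in free-form traffic can mean, the snippet below combines a candidate regular expression with a Luhn check to cut false positives. A real product would also parse binary, EBCDIC and the industry formats listed above; the pattern and function names here are illustrative only.

```python
import re

CANDIDATE = re.compile(r"(?<!\d)(?:\d[ -]?){13,19}(?!\d)")   # digit runs, allowing spaces or hyphens

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum over a full card-number string."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit, counting from the check digit
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text: str) -> list:
    """Return digit strings that look like PANs: plausible length and Luhn-valid."""
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group(0))
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits

print(find_pans("Order 20121101, card 4111-1111-1111-1111, total $99"))  # ['4111111111111111']
```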

4.       Have an option to sense not only the sensitive nature of the message, but also who is requesting it, in what context, and from where

This is where we started our discussion. Essentially, you should be able not only to identify data that is sensitive, but also to take the necessary actions based on the context. Intention, or heuristics, matter a lot more than just sensing that something is going out or coming in. This essentially means you should be able to sense who is accessing what, when, from where, and, more importantly, from what device. Once you identify that, you should be able to determine how you want to protect that data. For example, if a person is accessing specific data from a laptop within the corporate network, you can let the data go with transport security alone, assuming he has sufficient rights to access that data. But if the same person is trying to access the same data using a mobile device, you can tokenize the data and send only the token to the mobile device. (This also lets you handle the case where the location is unknown.) All conditions being the same, the tokenization will occur based on a policy that senses that the request came from a mobile device.
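
The laptop-versus-mobile example above boils down to a per-request policy decision. Below is a minimal sketch of that decision, assuming illustrative context attributes and any tokenize() function (for instance, the vault-backed one sketched earlier); it is not Intel's policy engine.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    user_cleared: bool          # does the caller have rights to the field at all?
    device_type: str            # "laptop", "mobile", ...
    on_corporate_network: bool

def protect_field(value: str, ctx: RequestContext, tokenize) -> str:
    """Decide per request whether to release the real value or a surrogate."""
    if not ctx.user_cleared:
        raise PermissionError("caller is not authorized for this field")
    if ctx.device_type == "laptop" and ctx.on_corporate_network:
        return value                      # transport security is considered sufficient here
    return tokenize(value)                # mobile or unknown location: release only a token

# Usage, with any tokenize() callable:
#   protect_field(pan, RequestContext(True, "mobile", False), tokenize)
```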

5.       Have an option to dynamically tokenize, encrypt, or apply format-preserving encryption, based on the need

This gives you the flexibility to encrypt certain messages/fields, tokenize certain messages/fields, or employ FPE on certain messages. While you are at it, don’t forget to read my blog here on why Intel’s implementation of the FPE variant is one of the strongest in the industry.

6.       Support the strongest possible algorithms for encryption and storage, and use the most random possible random numbers for tokenization

Not only should you verify that the solution has strong encryption algorithm options available out of the box (such as AES-256, SHA-256, etc.), but you should also ensure that the solution delivers cutting-edge security options as they become available – including support for the latest security updates.
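
For a sense of what “strong out of the box” looks like at the primitive level, here is a minimal example of AES-256 in GCM mode using the widely used Python `cryptography` package (an assumption of this sketch; it is not the gateway's internal implementation, where the equivalent operation runs inside a certified, often HSM-backed module).

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)   # AES-256 key; in production it comes from a key manager/HSM
aead = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per message
ciphertext = aead.encrypt(nonce, b"4111111111111111", associated_data=b"order-20121101")
plaintext = aead.decrypt(nonce, ciphertext, associated_data=b"order-20121101")
assert plaintext == b"4111111111111111"
```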

7.       Protect the encryption keys with your life. There is no point in encrypting the data, yet giving away the “Keys to the Kingdom” easily

Now this is the most important point of all. If there is one thing you take away from this article, let it be this: when you are looking at solutions, make sure not only that a solution is strong on all of the above points, but, most importantly, that it protects the proverbial keys with your life. This means the key storage should be encrypted, and the solution should support separation of duties (SoD), key-encrypting keys, strong key management options, key rotation, re-keying when keys need to be rotated, expire or are lost, key protection, key lifetime management, key expiration notifications, etc. In addition, you also need to explore whether there is an option to integrate with your existing in-house key manager, such as RSA DPM (the last thing you need is to disrupt the existing infrastructure by introducing a newer technology).
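
One common pattern behind “key-encrypting keys” and rotation is envelope encryption: data keys are stored only in wrapped (encrypted) form under a master key, so rotating the master means re-wrapping a handful of small keys rather than re-encrypting all of the data. The sketch below uses the same AESGCM primitive as above and illustrates the idea only; it is not a specific key manager's API.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_data_key(kek: bytes, data_key: bytes):
    """Encrypt (wrap) a data key under the key-encrypting key; only the wrapped form is stored."""
    nonce = os.urandom(12)
    return nonce, AESGCM(kek).encrypt(nonce, data_key, None)

def unwrap_data_key(kek: bytes, nonce: bytes, wrapped: bytes) -> bytes:
    return AESGCM(kek).decrypt(nonce, wrapped, None)

kek = AESGCM.generate_key(bit_length=256)       # held in the HSM / key manager, never written to disk
data_key = AESGCM.generate_key(bit_length=256)  # used to encrypt the actual records

nonce, wrapped = wrap_data_key(kek, data_key)
assert unwrap_data_key(kek, nonce, wrapped) == data_key

# Rotation: generate a new KEK, unwrap every data key with the old KEK and re-wrap with the new one;
# the bulk data encrypted under the data keys never has to be touched.
```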

8.       Encrypt the message while preserving the format so it won’t break the backend systems

This is really important if you want to do the tokenization or encryption on the fly without the backend or connected client applications knowing about it. When you encrypt the data and preserve its format, it will look and feel the same as the original data, and the receiving party won’t be able to tell the difference.

If you are wondering where Intel comes into the picture in this area, we address all of the discussion points mentioned in #1 to #8, and a lot more, with our Intel cloud data privacy solution (a.k.a. Intel ETB – Expressway Tokenization Broker). Every single standard mentioned here is supported, and we are working on adding newer, better standards as they come along.

Check out information about our tokenization and cloud data privacy solutions here.

Intel Cloud Data Privacy/ Tokenization Solutions

Intel Cloud/ API resource center

I also encourage you to download the Intel Expressway Tokenization Broker Data Sheet:

 

Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel


 

Content / Context / Device Aware Cloud Data Protection

In this two-part blog, I am going to talk about the Intel cloud data protection solution, which helps our customers utilize their data in both a context- and content-aware manner.

This is a newer set of technologies that has hit the market in the last few years. In the past, we used to think that just encrypting the transport layer (with TLS/SSL, for example) was good enough. Given the complex nature of services and API composition, we quickly realized that it was not enough. Then we moved to protecting the messages (most of the time the entire message), or protecting specific sensitive fields at the field level. The problem with any of these scenarios was that they were somewhat static in nature; somewhere there was a definition of what “sensitive data” is, and details related to strict protection of that data. However, when there is a real need to send sensitive data out and to protect it, making sure only the authenticated party can receive and/or use the message is critical.


Essentially, “content/context aware” data protection is data protection on steroids. Remember in prior years when we used DLP technologies, identified data leakage/loss based on certain policies and parameters, and stopped the data loss but did nothing beyond that? The problem with DLP is that it is passive in most cases. It identifies sensitive data based on some context/policy combination and then blocks the transaction. While this can work for rigid enterprise policy sets, it may not work for cloud environments, where you need these policies to be flexible. The issue is that when someone who is authorized really needs that data, it is unacceptable to have the transaction stopped.

What if there were a way to provide data protection which would be identity aware, location aware, and invocation aware, and yet would be policy based, compliance based, and, more importantly, very dynamic? In other words, what if you were to provide data protection based on content and context awareness? Gone are the days when you ensure that your systems are compliant and you are done. Read my blog on why getting compliant is not enough anymore (link here). That is because your data is NOT staying within your compliant enterprise Ft. Knox anymore; it is moving around. Getting your systems compliant, risk averse and secure is just not good enough, as your data is moving through other eco-systems, not just yours.

When you move your data through cloud providers (especially the public cloud) and add mobile devices (mobility) to the mix, the issue gets even more interesting. Sprinkle data residency issues on top of that to spice it up.

First of all, take a look at your cloud provider contract closely if you haven’t done so already.

  1. Are there any guarantees on where the data is stored (in other words, the location of the data residency)?
  2. Are there any guarantees on where the data will be processed (or the location of data processing)?
  3. Are they willing to share the liability with you if they lose your or your customer’s data?

Yes, some providers are better than others, but I have seen contracts that give me heart palpitations. No wonder companies are scared to death about protecting their data when moving to the cloud!

The data residency issues are especially big for some of our European customers. This is certainly true for multi-country services, where one has to restrict data residency for data at rest, but also where mandates exist for where data can be processed. Imagine dealing with financial, healthcare or other sensitive data for a specific country, where you are asked not only to store that data within the legal boundaries of that country, but also to process the data within data centers located in that country. You are then faced with additional requirements, including the need to sanitize data, route messages to services located in a specific place, desensitize the data for processing, and sanitize it again for storage.

Essentially, your solution needs to:

  1. Have a strong encryption engine which has all the possible security certifications that you can think of – such as FIPS 140-2 Level 3, DoD PKI, CC EAL 4+, etc.
  2. Use very strong encryption standards/ algorithm for data, whether in storage or in transit.
  3. Protect the encryption keys with your life. There is no point in encrypting the data yet giving away the “Keys to the Kingdom” easily.
  4. Have a solution that can sanitize the data very dynamically and very granularly, based on either pre-defined policies (such as XACML, etc.) or DLP based.
  5. Make a decision based on the content/context and protect the data based on the need. This means having the flexibility to encrypt the entire message, specific sensitive data in the message, have an option to preserve the format of the sensitive data of the message and/or tokenize the data based on the need.
  6. Encrypt the message while preserving the format, so it won’t break the backend systems.
  7. Tokenize the PCI and/or PII data for compliance and security reasons.
  8. Scrutinize the message more deeply if the message is intended to go to a non-secure location/ endpoint – such as mobile devices, cloud location, third world country, etc.
  9. Comply with data residency requirements by mandating the processing and storage of data in a specific instance of the service based on where it is located.
  10. Have an elaborate access-control mechanism for the data based on user/application clearance, data classification, and the time and day of the access request (see the sketch after this list).
  11. Most importantly, make all of the above policy based, with policies that can be changed dynamically based on need.
  12. Do all of the above seamlessly (or “automagically”).

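For item 10, a minimal sketch of the kind of rule involved is shown below: clearance must dominate classification, and the most sensitive class is only released during business hours. The labels, ranking and hours are illustrative assumptions, not a prescribed policy model.

```python
from datetime import datetime

CLEARANCE_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def may_access(user_clearance: str, data_classification: str, when: datetime) -> bool:
    """Illustrative access rule: clearance must dominate classification, and
    'restricted' data is only released during business hours."""
    if CLEARANCE_RANK[user_clearance] < CLEARANCE_RANK[data_classification]:
        return False
    if data_classification == "restricted" and not (8 <= when.hour < 18):
        return False
    return True

print(may_access("confidential", "internal", datetime(2012, 12, 10, 14, 0)))  # True
print(may_access("internal", "restricted", datetime(2012, 12, 10, 23, 0)))    # False
```
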
In Part 2 of my blog, I will discuss how the Intel cloud data privacy solution (or the cloud encryption/tokenization gateway) elegantly solves this problem and should be the only toolkit you will ever need in your arsenal for this issue.

In the meantime, you can check out information about our tokenization and cloud data privacy solutions here.

Intel Cloud Data Privacy/ Tokenization Solutions

Intel Cloud/ API resource center

I also encourage you to download the Intel Expressway Tokenization Broker Data Sheet:

 

Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel


Get the Straight Facts…API Manager Revealed

We are very excited to announce an Intel API management solution that was released today. The Intel® Expressway API Manager is a composite API platform.

Just creating outstanding APIs is not enough. Intel realized that you need a mechanism to communicate, explain, onboard, collaborate with, and manage developers. Our API Manager is a composite solution that provides on-premise and cloud-deployed API portals, a mechanism to manage your APIs, and help with developer on-boarding, registration, portal administration, a content management system, community tools and developer enablement tools.

Initially I was going to write a blog about what we do best and how we are different. But I was amazed just looking at the number of features we released in this version. So I am going to save that story and give you the straight facts below:

As part of our new solution, we provide the following:

  • Easily launch a secure website for API partners. Create an online portal, with your look and feel, for enrolling and supporting developer partners.
  • Take the hassle out of key management. Make key provisioning a snap, whether you’ve got a few partners or tens of thousands. Issue live keys, or require activation by a moderator.
  • Keep partners engaged with reports and tools. Show partner developers how many calls they’re making, which methods they’re using, and more.
  • Publish interactive API docs. Developers can execute calls directly from your API documentation.
  • Run a support forum and a developer blog. Foster an active developer community with full-featured forum and blogging tools.
  • Single Sign-on. Connect to your own identity store so partners don’t have to log in twice
  • B.Y.O.P.: Bring Your Own Portal. If you prefer, use Mashery’s API to plug in your own content management system (CMS)
  • Third-party Integration. Add outside services–such as billing engines–to your portal using Mashery’s API
  • Partner Management. Enable/disable keys & developer permissions
  • Your Branding. Use Javascript and CSS to completely match your brand’s look and feel
  • Built in with Mashery. Never worry about installation or hosting.
  • Markdown Compatibility. Let forum users post formatted code samples using the popular Markdown syntax
  • Role-based Access Control. Create walled-off content for beta testers and other special partners
  • Comment Engine. Allow partners to post comments to your documentation
  • API Value Tracking. See how your API drives key performance indicators such as traffic, purchases, and registrations.
  • Detailed Activity Reports. View API usage and trends by developer, key, and method.
  • Mashery Reporting API. Access all reporting and chart data through an API.
  • Reports-only Role. Securely share reports with colleagues outside your API team.
  • Partner Monitoring. See all activity for a specific partner or app.
  • Latency Measurement. Track response times for your API service and for Mashery
  • Load Statistics. See average and peak loads by endpoint over time.
  • Data Export. Download reports in CSV format for use in Excel.
  • Custom Report Integration. Grab call logs and report data for use in third-party applications
  • Manage APIs as products. Tailor API access to suit the needs of your most important customer/partner segments.
  • Define API access plans. Create custom access plans (standard, premium, etc.) without any coding.
  • Get fine-grained control over resource packaging. Choose which API resources (methods) are included in each plan.
  • Create response filters. Strip out response content for a plan without coding.
  • Reduce work for IT. Let business-side execs securely package API access.
  • Maximize API value. Give business development, marketing, and product management teams the power to negotiate custom API access.

Guess what: built into this solution is a world-class API gateway (refer to my blog on performance numbers and security certifications), which includes RESTful service enablement, service orchestration, composition, provisioning, all authentication features, protocol and data format mediation, trust and threat processing, SLA management and API rate limiting.
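
API rate limiting, the last item in that list, is usually some variant of a per-key token bucket. The toy version below is only meant to convey the idea (the names and limits are made up), not the gateway's actual implementation.

```python
import time

class TokenBucket:
    """Allow roughly `rate` calls per second per API key, with short bursts up to `burst`."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}   # one bucket per API key

def check_rate_limit(api_key: str) -> bool:
    bucket = buckets.setdefault(api_key, TokenBucket(rate=5, burst=10))
    return bucket.allow()     # a gateway would return HTTP 429 when this is False
```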

Check out Intel® Expressway API Manager for more details. I am also doing a joint webinar with Mashery on Dec. 4, Secure, Expose, and Package APIs as Products. You can register here.

 

Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel


Announcing Intel(R) Expressway API Manager

We are announcing today the availability of a new product called Intel(R) Expressway API Manager, which we call a composite API platform. What we’ve done here is integrated the Expressway Service Gateway with the developer portal and developer management features from API management market leader Mashery!

Composite API Platform

The solution is a composite because it pairs a hardened on-premise security gateway with the cost savings of a SaaS cloud for developer registration, sign-up and management, which is ideal for large enterprises. Further, Mashery has the benefit of experience, as they have been ‘doing’ API management since about 2006; their product is highly mature and a great match for Expressway.

Both teams are very excited about the new offering. Let’s highlight some of the features:

  • It’s an Intel product sold and supported by Intel. We think this is important for Enterprises that want to make an investment in API management from a large vendor
  • Intel customers get access to all gateway features including: RESTful service enablement, service orchestration, composition, provisioning, all authentication features, protocol and data format mediation, trust and threat processing, SLA management and API rate limiting.
  • A new API console provided by Intel allows you to manage gateway services as APIs
  • Intel customers also get a subscription to the Mashery cloud for developer on-boarding, registration, portal administration, content management system, community tools and developer enablement tools
  • Mashery and the Intel Gateway are fully integrated for access control, basic policies and analytics, with more integration planned for the future. This means Intel customers can use Mashery for developer registration, key generation, and provisioning API rate limits

We wanted a solution that would address large Enterprise requirements which often require “on-prem” traffic processing using a certified gateway but still offer the advantages of a SaaS cloud for evangelizing APIs to internal or external developers or business partners.

Blake

 

Cost Effective PCI DSS Tokenization for Retail (Part II)

Welcome back, and thanks for continuing to read our blog series on reducing PCI Scope.  In our last blog we covered why reducing PCI Scope is so important.

Before we address common approaches to Tokenization, let’s recap what PCI DSS Tokenization is:

What is PCI DSS Tokenization?

PCI DSS tokenization is a means of protecting credit card data by substituting a different, non-encrypted value for a credit card number. Usually this takes the form of a random number (with some of the first digits and ending digits preserved) that appears to back-end systems to be a valid credit card number.

It is important that the random elements of the token (that is, the digits that are not preserved) are not in any way derived from the actual credit card number.[i] This randomized token is stored in a secure vault, which defines the mapping from the PAN to the token.
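
Expressed as code, the definition above might look like the toy vault below: the token is drawn at random (never computed from the PAN), it keeps the first six and last four digits so it still looks like a card number, and the vault holds the only mapping back. In production the vault is an encrypted, access-controlled, audited datastore rather than a Python dictionary; the class and method names are illustrative.

```python
import secrets

class TokenVault:
    """Toy vault: tokens are random (never derived from the PAN), and the mapping lives only here."""
    def __init__(self):
        self._pan_to_token = {}
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        while True:
            middle = "".join(str(secrets.randbelow(10)) for _ in range(len(pan) - 10))
            token = pan[:6] + middle + pan[-4:]          # still looks like a PAN to back-end systems
            if token not in self._token_to_pan and token != pan:
                self._pan_to_token[pan] = token
                self._token_to_pan[token] = pan
                return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]    # only callers allowed to reach the vault can do this

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert vault.detokenize(token) == "4111111111111111"
```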

Common Approaches to Tokenization:

The first choice that merchants must make when embarking upon a path to scope reduction through tokenization is whether to build their own tokenization technology or to acquire it from a vendor (Intel/McAfee is of course such a vendor).

In the past, merchants chose to build their own solutions, as commercial PCI DSS tokenization products are relatively new (with formal guidance from the PCI Council only appearing in late 2011).[ii]

In order to keep this blog post to a manageable length we will concentrate on the merchant data center rather than addressing both the point of sale locations and the data center in one post.   The same principles that apply to the data center are applicable to the retail locations as well. I invite those interested in a subsequent article on point of sale scope to contact me.

Figure 1 below shows reference architecture for retail data centers.   It does not represent any particular merchant’s data center, but rather represents a set of common components aligned in a typical architecture.     Virtually every retailer has an authorization path for data, a clearing path (that is usually a mainframe with batch processing of results in TLOG or similar formats), and provisions at the point of sale to process transactions in case the internet is not available at the time of the credit card charge.


 

Figure 2 represents the typical PCI DSS Scope associated with this architecture.   Please note that due to the offline requirement for processing credit cards (and therefore the need to store transactional data at the store server), typically most parts (if not all) of the retail locations are in PCI scope.

Figure 2:  Typical PCI DSS Scope at a Retailer’s Data Center

Three Approaches to PCI DSS Tokenization Architecture:

There are three common architectural approaches to PCI DSS Tokenization.

1)     Monolithic Approach

2)     API Tokenization (sometimes called Toolkit Tokenization)

3)     In-line Tokenization

The monolithic approach appeals to many as it promises to remove Points of Sale completely from PCI Scope and to bring the data center along for free.   This typically involves upgrading and standardizing equipment at the retail locations to allow encryption at the PIN Pad or Register.

Often the acquiring bank is able to decrypt the results ensuring that there is no obvious reason why actual cleartext PAN data may be needed within the enterprise.

This theory is very appealing as it implies it may be possible to remove the entire enterprise from PCI Scope and therefore drastically reduce compliance costs.

Peeling back the onion on Monolithic Scope Reduction:

There are several challenges associated with implementing monolithic scope reduction.   Some of them are listed below:

1)     Cost Versus Benefit

2)     Time to Value

3)     Fragility to Change

4)     Vendor and bank lock-in (what happens when it is time to renew the contract?)

Cost Versus Benefit:

As the value of monolithic scope reduction depends upon capturing the PAN data and protecting (encrypting or tokenizing) it at every retail device, all retail locations must be upgraded before real scope reduction is accomplished.

For a large retailer with thousands of locations, this cost could easily run into the tens of millions of dollars. Often this does not compare favorably with the value achieved by reducing or eliminating PCI Scope.

Time to Value:

Most large retailers upgrade retail locations on a rolling basis (often on a 5 year or more schedule).   Deviating from this schedule is often very difficult.

Therefore, if each retail location must be upgraded (perhaps with new POS terminals and potentially with new software and registers) before significant business value is achieved, then expenditures begin on day one and benefits only begin to accrue at the end of a complete retail refresh cycle (usually measured in years).

Fragility to Change:

The monolithic scope reduction approach depends upon the assumption that the only places that PAN data enters the retailer are controlled directly by the retailer or the data protection vendor and that the only services that must consume the protected PAN data are in direct partnership with the data protection vendor (i.e. The Acquiring Bank has a partnership with the tokenization or encryption vendor such that it is able to derive cleartext PAN data from the protected PAN).

Therefore, no PAN data can make its way into the system in a way that the data protection vendor cannot control and no cleartext PAN data will ever be needed within the merchant’s data centers or retail locations.

Sadly, this is often not the case.  Vectors of PAN data into the enterprise can often be sanitized using custom software development (at additional cost and project risk), but the data egress path is often much more difficult to solve.

Imagine that all of the input PAN data is tokenized at the point of sale or other entry into the system (including CRM input, FAX/OCR input, Web Site input, etc.).    Now, the back end IT systems need to call out to a fraud detection service to ensure they should complete the transaction.

Commonly these fraud detection services do not accept encrypted or tokenized PAN data, but rather depend upon actual PAN data.   Now the enterprise openly needs PAN data in the data center to satisfy this new requirement.   PCI Scope is back in the data center.   Depending upon where in the architecture PAN data is actually needed, the entire data center may re-enter PCI scope.

Perhaps the bigger risk to change is in the case of a merger or acquisition.   Resolving one or more monolithic tokenization strategies with heterogeneous data centers and retail locations makes this problem nearly intractable.

Vendor and Bank Lock-In:

For monolithic scope reduction to function properly there needs to be close cooperation among at least the following entities:

1)     The Point of Sale Terminal vendor

2)     The tokenization or encryption software vendor

3)     The merchant

4)     The Acquiring Bank(s)

5)     The vendor(s) utilized for credit card authorization.

If any partnership among any of these critical participants breaks down, the entire system may fall apart.    What if the tokenization software vendor falls out with the point of sale vendor and the combined solution is no longer supported?    What if the merchant wishes to change Acquirer?

This fragility to change has the potential to put a merchant at a large disadvantage during negotiations with any of these partners for new equipment or contract extensions.

Tokenizing at the Data Center:

One of the more common solutions to the difficulties associated with monolithic tokenization is to tokenize at the data center. This does leave the retail locations within scope, but it can bring immediate value to the merchant, rather than the marginal value over a very long time frame associated with the monolithic approach. It is often much less expensive up front (as there is no mandated retail hardware and software refresh).

There are two common approaches to tokenizing in the data center:

1)     API Tokenization

2)     In-line Tokenization

API Tokenization Explained:

API tokenization modifies existing systems to explicitly request token data from a tokenization vault when PAN data first enters the system. This is typically implemented with a custom SDK or over SOAP-based web services. The system reverses the process before the outbound interface to a payment gateway or payment processor.

Often a proxy server can be used on the front end of the web site to ensure that PAN data is tokenized before entry into Web Servers and that PAN data does not exist in at least some of the e-commerce server infrastructure.

For comparison purposes the architecture is unchanged from the previous example (including the fraud detection web service which requires PAN data).

Figure 3 shows typical PCI Scope for this retail architecture with API tokenization.   The e-commerce engine, the authorization engine and the clearing engine have been modified in order to explicitly request tokens and PANs as necessary.   Since all of them must have access to PANs (both for input and output for the clearing engine and authorization engine and for connection to the fraud detection web service in the case of the e-commerce engine) they are all within PCI Scope.

The scope reduction is small (the web servers, and perhaps the e-commerce engine, may be removed from scope if there is no need for actual PAN data inside their bounds) compared with the cost of the tokenization solution and the cost of modifying existing systems to explicitly request PANs and tokens. The downside of API tokenization is that each application that wishes to tokenize or de-tokenize data must be modified, typically with an SDK, which can mean additional development costs for the merchant.

In-Line Tokenization:

An alternative is to take a similar approach to tokenization at the data center with one subtle (yet very powerful) change.    What if instead of modifying existing systems to implement tokenization and detokenization, we simply add in-line proxies to the data flows that accomplish this same effect?

All back-end systems still believe they are operating on PAN data, but they actually work on tokens. Data on the way into the data center is tokenized by the secure, high-speed proxy, and data on the way out of the data center (where PAN data is actually required) is detokenized by the same high-speed proxy.
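
The egress half of that flow is the subtle part: tokens are swapped back to PANs only for destinations that genuinely need them, such as the fraud detection service described earlier, so everything behind the proxy stays out of scope. A rough sketch, assuming a simplified token pattern, a plain dictionary for the vault mapping, and hypothetical destination host names:

```python
import re

TOKEN_PATTERN = re.compile(r"\b\d{13,19}\b")
DETOKENIZE_FOR = {"fraud-check.example.net", "payments.acquirer.example.com"}   # hypothetical hosts

def rewrite_egress(message: str, destination_host: str, token_to_pan: dict) -> str:
    """Swap tokens back to PANs only when the message leaves toward a party that genuinely needs them."""
    if destination_host not in DETOKENIZE_FOR:
        return message                                     # everyone else keeps seeing tokens
    return TOKEN_PATTERN.sub(lambda m: token_to_pan.get(m.group(0), m.group(0)), message)

token_to_pan = {"4111112345671111": "4111111111111111"}
print(rewrite_egress('{"card": "4111112345671111"}', "fraud-check.example.net", token_to_pan))
print(rewrite_egress('{"card": "4111112345671111"}', "reporting.internal", token_to_pan))
```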

All of the back end data center systems fall from scope immediately.

Please see Figure 4 for an illustration of how In-Line tokenization reduces PCI DSS Scope.

Please note that this same proxy that tokenizes and detokenizes the data often also decrypts the PAN data. The proxy can therefore participate in PCI Scope reduction strategies at the retailer that involve encryption (symmetric or asymmetric) or other forms of tokenization at the point of sale.

This approach has many advantages over Monolithic tokenization:

1)     Much faster time to value.   In-line tokenization can be implemented in months rather than years and immediately begin reducing PCI Scope.

2)     Much less expensive:  No need to upgrade anything at the point of sale.

3)     Much more resilient to change:    There are no dependencies upon hardware or software at the point of sale or on specific financial partners (acquiring banks or payment gateways for example).

In-line tokenization is also superior to API tokenization:

1)     Much more scope reduction in the data center.

2)     Existing systems do not need to be modified as they do not change with In-line Tokenization.

3)     Much shorter time to value as it is much quicker to put an inline tokenization solution in place than to modify existing, business critical systems.

4)     Much less risk of breaking existing systems in the pursuit of saving money.

5)     Often In-line Tokenization solutions are much less expensive (even in terms of lower licensing costs) than comparable API tokenization solutions.

The same proxy that is used to tokenize and detokenize data can also transform data to other formats and other protocols. This can be very important during a merger: the proxy layer can make newly acquired stores appear to the data center to be the same as existing stores, and the same technology can make the new owner’s data centers appear to the newly acquired retailer to be the same as its old data centers. This allows for very quick integration of newly acquired assets and for realizing economies of scale much more quickly after a merger.

In-line tokenization can also be used with existing bespoke tokenization solutions.   Instead of having the proxy make a tokenization/detokenization request to its internal service, the proxy broker can simply make a call out to an external service to perform these activities.

This approach of utilizing the proxy to call out to non-native tokenization can even be used with API tokenization products to give them the same advantages of native In-line tokenization (except perhaps the smaller price tag).

Conclusion:

There are two primary reasons most merchants evaluate PCI DSS tokenization options.

1)     To reduce the cost of PCI DSS compliance (as cost is directly related to scope)

2)     To decrease the risk of a data breach.

Given these constraints, we believe that the best option is often to begin at the data center where there is the most value gained with the least effort and then utilize this effort to inform the decision of how best to secure the points of sale.

I encourage you to read additional papers on the subject. At cloudsecurity.intel.com, please consider downloading this popular paper describing how a service gateway can help reduce PCI DSS Scope.

Alternatively, you may enjoy watching the webinar: 3 Core PCI DSS Tokenization Models – Choosing the Right PCI DSS Strategy

Please reach out to me with any questions that arise in your pursuit of a solution for reducing PCI Scope in your organization.

Next I will cover expanding the in-line tokenization concept to general in-line data protection for securing other forms of data including Personally Identifiable Information (PII) and Personal Health Information (PHI).

  Tom Burns serves in Intel’s Data Center Software group where he works with many of the world’s top retailers to help increase security and reduce PCI DSS Scope. Tom joined Intel in 2008 and holds a BSEE from Purdue University.

 

[i] Information Supplement: PCI DSS Tokenization Guidelines, PCI Council, August 2011, section 4.1

[ii] Information Supplement: PCI DSS Tokenization Guidelines, PCI Council, August 2011

Who is more sensitive – you or your data?

Sooner or later the following (not so hypothetical) quandary will undoubtedly arise: when moving your data to the cloud, you will face an array of decisions. What considerations will you make for the protection of your data? In the not-so-distant past, you most likely invested a lot of time and resources into building an “enterprise Ft. Knox” – a state-of-the-art, highly advanced and very expensive solution replete with sophisticated gadgets strategically positioned around the enterprise perimeter. You had a moment to breathe a sigh of relief, taking solace in knowing that no one could penetrate the fortress you built. You even went so far as to give yourself a pat on the back, enjoying the moment.

Alas, the respite ended with a tap on the shoulder! The King, also known as the CIO, has informed you that the rules have changed! Apparently, while you were working hard building this impenetrable boundary around the edge and fixing the exposure, he made a deal for the kingdom (in this case, your company) that expanded its territory. As a result, the short but life-changing edict is to move processing to a third-world country (in other words, to the cloud). Gulp.

Medieval comparisons aside, the fact of the matter is that your IT systems have been moved to the cloud – public, private, or hosted. With the stroke of a quill (or pen), the circumscribed limits of your perimeter have changed. Unfortunately, protecting your databases, processes, applications, app servers, web servers, systems, middleware, and back-end systems won’t work anymore, and as in most similar scenarios, you’ll have absolutely no control over them in a cloud environment. It’s highly likely that you won’t even know where things are running most of the time.

The advantages of moving to the cloud cannot be denied, but the new paradigm shift is not without headaches: real concerns around data privacy, security, auditing, compliance, and residency (at times you can’t let the data leave certain countries, for example), in addition to having to worry about being exposed to hackers on a 24×7 basis.

Now what? Well, there is an easy way to solve this problem. Instead of protecting all of the above, you can simply protect your data instead. This is exactly where Intel cloud encryption/data privacy gateways shine. We created these gateways a few years ago, keeping the ever-changing landscape in mind.

So how do we do it? Well, for starters, the Intel cloud encryption gateway is the ONLY solution that is available in multiple form factors – appliance, software and virtual. It is also available as a hosted solution through our partners, should you choose that option. Unlike competing vendors in the market, our appliances are not “virtual appliances”; we provide a “true” appliance. This is imperative in the security field, especially when you need FIPS 140-2 Level 3 compliance in government or other highly secure environments such as healthcare. (As a side note, I recently read a competitor’s spec where that company claimed to “enable” you, so you could plug in and use FIPS 140-2 if needed. It’s not certain what exactly they meant or how to parse the finely nuanced language used in their advertisements. In contrast, we are completely straightforward about our enterprise-class capabilities. And, yes, we have that feature built in already.)

In addition, our appliance has a unique set of features that includes tokenization, encryption, and Format Preserving Encryption (FPE), as well as others that help ensure the authenticity, integrity, and validity of your data. That’s not all. What makes us unique is that our cloud encryption gateways are built to fit your current eco-system. This means that regardless of the protocol, identity system, logging system, monitoring system, or data/message type, we can encrypt/tokenize the data that is flowing in and out of your organization.

Let’s think about that for a second. You get these appliances, drop them in the line of traffic, do a few configurations, and you are done. Either you keep the sensitive data and send the tokens to the cloud, or alternatively, send the protected (encrypted) data to the cloud and keep the keys to yourself. This allows you to be compliant and mitigate your risk. There are no more long drawn-out IT engagements, nor nightmare filled sleepless nights trying to figure out what will happen when moving your sensitive data to the cloud.

This is really important where time to market (TTM) is key. We can have you up and running and production-ready in a matter of days (or even hours, as most cases call for). When making a decision, it’s also essential for your calculus to include ROI and TCO. When you buy a similar solution from someone else, make sure to ask yourself these questions: Will I have to spend hundreds of hours building this? How long will it take me to integrate this with my eco-system? We can get you connected with most existing enterprise systems quickly, such as logging, monitoring, auditing, middleware, identity systems, databases, (web) services, and SIEM systems such as ArcSight/Nitro. And you get the added advantage of having mobile enablement already built in.

There’s one last chuckle I want to share. I saw a competitor’s blog suggesting that they are rated by Gartner for tokenization and encryption gateways and are rated “close enough” to Intel and McAfee in this area. I just want to close by saying that we are Intel/McAfee, and we thankfully don’t feel compelled to make such associations with someone else just to bolster our viability or engender notions of greater stability. We genuinely care for our customers and know that we will be here for many years to come.

I encourage everyone to download a very topical paper on the subject regarding Intel Expressway Tokenization Broker:

 

Please contact me if you need more information. I’m more than happy to send you any additional information that you may need.

Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel
