Context-Aware Data Privacy (Part II)

If you missed Part 1 of this article, you can read it here when you get a chance (link).

As a continuation of Part 1, where I discussed the issues with data protection, this article explores how to solve some of those issues.

People tend to forget that hackers are attacking your systems for one reason only – DATA. You can spin that any way you want, but at the end of the day, they are not attacking your systems to see how you configured your workflow or how efficiently you processed your orders. They couldn’t care less. They are looking for the golden nuggets of information that they can either resell or use to gain some other kind of monetary advantage. Your files, databases, data in transit, storage data, archived data, etc. are all vulnerable and will be of value to the hacker.

Gone are the old days when someone was sitting in mom’s basement and hacking into US military systems to boast about their ability amongst a small group of friends. Remember WarGames, the movie? Modern-day hackers are very sophisticated, well-funded, often operating as for-profit organizations, and backed either by big organized cyber gangs or by other entities within their respective organizations.

So you need to protect your data at rest (regardless of how old the data is – as a matter of fact, the older the data, the less protected it tends to be), data in motion (going from somewhere to somewhere – whether between processes, services, or enterprises, or into/from the cloud or storage), and data in process/use. You need to protect your data with your life.

Let us closely examine the things I said in my last blog (Part 1 of this blog) – the things that are a must for a cloud data privacy solution.

More importantly, let us examine the elegance of our data privacy gateways (code named Intel ETB – Expressway Tokenization Broker), which can help make this costly, scary, mind-numbing experience go easily and smoothly. The following elements, embedded in our solution, are going to make your problem go away sooner.

1.       Security of your sensitive message processing device

As they say, Caesar’s wife must be above suspicion (did you know Caesar divorced his wife in 62 BC?). What is the point of having a security device that inspects your crucial traffic if it can’t be trusted? You need to put in a solution/device whose vendor can make assertions regarding security and has the necessary certifications to back up those claims. This means that a third-party validation agency should have tested the solution and certified it to be ‘kosher enough’ for an enterprise, data center, or cloud location. The certification must include FIPS 140-2 Level 3, CC EAL 4+, DoD PKI, STIG vulnerability testing, NIST SP 800-21, support for HSMs, etc. The validation must come from recognized authorities, not just from the vendor.

2.       Support for multiple protocols

When you are looking to protect your data, it is imperative that you choose a solution that can handle not only the HTTP/HTTPS, SOAP, JSON, AJAX, and REST protocols. In addition, you need to consider whether the solution supports all standard protocols known to the enterprise/cloud, including “legacy” protocols such as JMS, MQ, EMS, FTP, TCP/IP (and secure versions of all of the above) and JDBC. More importantly, you also need to determine whether the solution can speak industry-standard protocols natively, such as SWIFT, ACORD, FIX, HL7, MLLP, etc. You also need to look at whether or not the solution has the capability of supporting other custom protocols that you might have. The solution you are looking at should give you the flexibility of inspecting your ingress and egress traffic regardless of how your traffic flows.

3.       Able to read into anything

This is an interesting concept. I was listening to one of our competitor’s webcasts… there was complete silence when what appeared to be a dreaded question was asked of the person speaking on behalf of that company: “How do you help me protect a specific format of data that I use in transactions with a partner?” Without hesitation, the presenter answered by suggesting their solution lacked support for it. While I’m not trying to be unnecessarily abrasive, the point is that you should have the capability to look into any format of data that is flowing into, or out of, your system when the necessity arises. This means that you should be able to inspect not only XML, SOAP, JSON, and other modern message formats. A solution should also be able to retrofit your existing legacy systems to provide the same level of support. Message formats such as COBOL (oh yes, we will be doing a Y10K on this, all right), ASCII, Binary, EBCDIC, and other unstructured data streams are of equal importance. Sprinkle in industry-format messages such as SWIFT, NACHA, HIPAA, HL7, EDI, ACORD, EDIFACT, FIX, and FpML to make the scenario interesting. But don’t forget our good old messages that can be sent in conventional ways, such as MS Word, MS Excel, PDF, PostScript, and good old HTML. You need a solution that can look into any of these data types and help you protect the data in those messages seamlessly.
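
To illustrate the idea (this is a minimal, hypothetical sketch, not the product’s actual detection engine): once a payload of any format has been decoded to text, candidate card numbers can be flagged with a simple pattern match plus a Luhn checksum. All function names here are invented for the example.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Scan arbitrary decoded text for candidate PANs (13-19 digit runs)."""
    candidates = re.findall(r"\b\d{13,19}\b", text)
    return [c for c in candidates if luhn_valid(c)]

print(find_card_numbers("order 1234 paid with 4111111111111111"))
```

The same scan can be applied to text extracted from EBCDIC streams, spreadsheets, or PDFs once a format-specific decoder has run.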

4.       Have an option to sense not only the sensitive nature of the message, but also who is requesting it, in what context, and from where

This is where we started our discussion. Essentially, you should be able not only to identify data that is sensitive, but also to take the necessary actions based on the context. Intent, or heuristics, is a lot more important than just sensing something that is going out, or in. So this essentially means you should be able to sense who is accessing what, when, from where, and more importantly from what device. Once you identify that, you should be able to determine how you may want to protect that data. For example, if a person is accessing specific data from a laptop within the corporate network, you can let the data go with transport security, assuming he has enough rights to access that data. But if the same person is trying to access the same data using a mobile device, you can tokenize the data and send only the token to the mobile device. (This allows you to solve the problem where the location is unknown as well.) All conditions being the same, the tokenization will occur based on a policy that senses that the request came from a mobile device.
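
As a rough illustration of such a policy (a hypothetical sketch, not the gateway’s actual policy engine): a laptop on the corporate network receives the real value over transport security, while a mobile device receives only a random token. The in-memory vault, device labels, and policy rule are all invented for the example.

```python
import secrets

TOKEN_VAULT: dict[str, str] = {}   # token -> original value (in-memory stand-in)

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)   # random token carries no information about the value
    TOKEN_VAULT[token] = value
    return token

def protect(value: str, device: str, on_corporate_network: bool) -> str:
    """Context-aware decision: pass the value through for trusted laptops
    on the corporate network, tokenize for mobile or unknown locations."""
    if device == "laptop" and on_corporate_network:
        return value               # transport security (TLS) deemed sufficient
    return tokenize(value)         # only the token leaves the enterprise

assert protect("123-45-6789", "laptop", True) == "123-45-6789"
token = protect("123-45-6789", "mobile", True)
assert token != "123-45-6789" and TOKEN_VAULT[token] == "123-45-6789"
```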

5.       Have an option to dynamically tokenize, encrypt, or apply format-preserving encryption based on the need

This will allow you the flexibility to encrypt certain messages/fields, tokenize certain messages/fields, or employ FPE on certain messages. While you are at it, don’t forget to read my blog on why Intel’s implementation of the FPE variation is one of the strongest in the industry here.

6.       Support the strongest possible algorithms for encryption and storage, and use cryptographically strong random numbers for tokenization

Not only should you verify that the solution has strong encryption algorithm options available out of the box (such as AES-256, SHA-256, etc.), but you should also ensure that the solution delivers cutting-edge security options when they become available – including support for the latest security updates.
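
As a small illustration of the randomness point: the quality of a token depends entirely on the entropy behind it. In Python, for example, the stdlib offers an OS-backed CSPRNG (`secrets`) and strong digests (`hashlib`); a token drawn this way carries no information about the value it replaces. This is a generic sketch, not the product’s internals.

```python
import secrets
import hashlib

# Tokens must be unpredictable: use a CSPRNG, never random.random().
token = secrets.token_hex(16)            # 128 bits of OS-provided entropy
assert len(token) == 32                  # 16 bytes rendered as hex

# Integrity/reference hashing should use a strong digest such as SHA-256:
digest = hashlib.sha256(b"4111111111111111").hexdigest()
assert len(digest) == 64
```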

7.       Protect the encryption keys with your life. There is no point in encrypting the data, yet giving away the “Keys to the Kingdom” easily

Now this is the most important point of all. If there is one thing you take away from this article, let this be it: when you are looking at solutions, make sure not only that a solution is strong on all of the above points, but, most importantly, that you protect the proverbial keys with your life. This means the key storage should be encrypted, and the solution should be capable of providing: SOD (separation of duties), key-encrypting keys, strong key management options, key rotation, re-key options for when keys need to be rotated, expire, or are lost, key protection, key lifetime management, key expiration notifications, etc. In addition, you also need to explore whether there is an option to integrate with your existing in-house key manager, such as RSA DPM (the last thing you need is to disrupt the existing infrastructure by introducing a newer technology).
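
To make the key-lifecycle requirements concrete, here is a hedged sketch of the metadata a key manager tracks: a data key wrapped under a key-encrypting key (KEK), a lifetime, and a rotation step that retires the old key. The structure and names are illustrative only; a real key manager (in-house or external, such as RSA DPM) handles the actual wrapping, storage, and expiration notifications.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
import secrets

@dataclass
class ManagedKey:
    key_id: str
    wrapped_key: bytes          # data key encrypted under a KEK, never stored raw
    created: datetime
    lifetime: timedelta
    retired: bool = False

    @property
    def expires(self) -> datetime:
        return self.created + self.lifetime

def rotate(old: ManagedKey) -> ManagedKey:
    """Retire the old key and mint a successor. A real system would also
    re-wrap or re-encrypt the data protected under the old key."""
    old.retired = True
    return ManagedKey(
        key_id=secrets.token_hex(4),
        wrapped_key=secrets.token_bytes(32),   # stand-in for KEK-wrapped material
        created=datetime.now(timezone.utc),
        lifetime=old.lifetime,
    )
```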

8.       Encrypt the message while preserving the format so it won’t break the backend systems

This is really important if you want to do the tokenization or encryption on the fly without the backend or connected client applications knowing about it. When you encrypt the data and preserve its format, it will look and feel the same as the original data, and the receiving party won’t be able to tell the difference.
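
True format-preserving encryption (e.g., the NIST FF1/FF3 modes) is keyed and reversible; as a stand-in, this purely illustrative sketch shows format-preserving tokenization of a card number, keeping length, digit-ness, and the last four digits so backend validation rules and display logic still work unchanged.

```python
import secrets

def format_preserving_token(pan: str, keep_last: int = 4) -> str:
    """Replace the leading digits of a card number with random digits,
    preserving length, character class, and the trailing digits."""
    head = "".join(secrets.choice("0123456789") for _ in pan[:-keep_last])
    return head + pan[-keep_last:]

original = "4111111111111111"
token = format_preserving_token(original)
assert len(token) == len(original)   # same length as the original PAN
assert token.isdigit()               # still all digits
assert token.endswith("1111")        # last four preserved for display/matching
```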

If you are wondering where Intel comes into the picture in this area, we address all of the discussion points mentioned in #1 to #8, and a lot more, with our Intel Cloud data privacy solution (a.k.a. Intel ETB – Expressway Tokenization Broker). Every single standard mentioned here is supported, and we are working on adding the newer, better standards as they come along.

Check out information about our tokenization and cloud data privacy solutions here.

Intel Cloud Data Privacy/ Tokenization Solutions

Intel Cloud/ API resource center

I also encourage you to download the Intel Expressway Tokenization Broker Data Sheet:

 

Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel

Andy Thurai is Chief Architect and Group CTO of Application Security and Identity Products with Intel, where he is responsible for architecting SOA, Cloud, Mobile, Big Data, Governance, Security, and Identity solutions for their major corporate customers. In his role, he is responsible for helping Intel/McAfee field sales, technical teams and customer executives. Prior to this role, he has held technology architecture leadership and executive positions with L-1 Identity Solutions, IBM (Datapower), BMC, CSC, and Nortel. His interests and expertise include Cloud, SOA, identity management, security, governance, and SaaS. He holds a degree in Electrical and Electronics Engineering and has over 25 years of IT experience.

He blogs regularly at www.thurai.net/securityblog on Security, SOA, Identity, Governance and Cloud topics. You can also find him on LinkedIn at http://www.linkedin.com/in/andythurai

 

Content / Context / Device Aware Cloud Data Protection

In this two-part blog, I am going to talk about the Intel Cloud Data protection solution that helps our customers utilize their data, in both a context and content-aware manner.

This is a newer set of technologies that has hit the market in the last few years. In the past, we used to think just encrypting the transport layer (such as TLS/SSL) was good enough. Given the complex nature of services and API composition, we quickly realized that it was not enough. Then we moved to protecting the messages themselves (most of the time, the entire message), or protecting specific sensitive fields at the field level. The problem with any of these scenarios was that they were somewhat static in nature; somewhere there was a definition of what “sensitive data” is, and details related to the strict protection of that data. However, when there is a real need to send sensitive data out and to protect it, making sure only the authenticated party can receive and/or use the message is critical.

Content Context Device Aware Cloud Data Protection

Essentially “Content/Context Aware” data protection is data protection on steroids. Remember in prior years when we used DLP technologies, identified data leakage/data loss based on certain policies/parameters, and stopped the data loss, but did nothing beyond that? The problem with DLP is that it is passive in most cases. It identifies sensitive data based on some context/policy combination and then blocks the transaction. While this can work for rigid enterprise policy sets, it may not work for cloud environments where you need these policies to be flexible. The issue is that when someone really needs to have that data (and is authorized for it), it is unacceptable to have the transactions stopped.

What if there were a way to provide data protection which would be identity aware, location aware, and invocation aware — and yet would be policy based, compliance based, and, more importantly, very dynamic? In other words, what if you were to provide data protection based on content and context awareness? Gone are the days in which you ensure that your systems are compliant and you are done. Read my blog on why getting compliant is not enough anymore (link here). That is because your data is NOT staying within your compliant enterprise Ft. Knox anymore; it is moving around. Getting your systems compliant, risk averse, and secure is just not good enough, as your data is moving through other eco-systems, not just yours.

When you move your data through cloud providers (especially public clouds) and add mobile devices (mobility) to the mix, the issue gets even more interesting. Sprinkle data residency issues on top of that to spice it up.

First of all, take a look at your cloud provider contract closely if you haven’t done so already.

  1. Are there any guarantees on where the data is stored (in other words, the location of the data residency)?
  2. Are there any guarantees on where the data will be processed (or the location of data processing)?
  3. Are they willing to share the liability with you if they lose your or your customer’s data?

Yes, some providers are better than others, but I have seen some contracts that give me heart palpitations. No wonder companies are scared to death about protecting their data when moving to the cloud!

The data residency issues are especially big for some of our European customers. This is certainly true for multi-country services, where one has to restrict data residency for data at rest, but also where mandates exist for where data can be processed. Imagine when you are dealing with financial, healthcare, and other sensitive data for a specific country, and they ask that you not only store that data in a place that is within the legal boundaries of that country, but also process the data within data centers located in that country. You are faced with yet additional requirements, including a need to sanitize data, route messages to services located in a specific place, desensitize the data for processing, and sanitize it again for storage.
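
A sketch of the routing requirement described above: map each residency mandate to an in-country processing endpoint, and refuse to route rather than silently falling back to an out-of-country service. The endpoints and country codes here are invented for illustration.

```python
# Hypothetical residency policy: country code -> in-country processing endpoint.
RESIDENCY_ENDPOINTS = {
    "DE": "https://eu-de.example.com/process",
    "FR": "https://eu-fr.example.com/process",
}

def route(record_country: str) -> str:
    """Pick an endpoint that satisfies the record's residency mandate;
    failing closed is safer than processing data in the wrong country."""
    try:
        return RESIDENCY_ENDPOINTS[record_country]
    except KeyError:
        raise ValueError(f"no in-country endpoint for {record_country}") from None

assert route("DE").startswith("https://eu-de.")
```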

Essentially, your solution needs to:

  1. Have a strong encryption engine which has all the possible security certifications that you can think of – such as FIPS 140-2 Level 3, DoD PKI, CC EAL 4+, etc.
  2. Use very strong encryption standards/algorithms for data, whether in storage or in transit.
  3. Protect the encryption keys with your life. There is no point in encrypting the data yet giving away the “Keys to the Kingdom” easily.
  4. Have a solution that can sanitize the data very dynamically and very granularly, based either on pre-defined policies (such as XACML, etc.) or on DLP detection.
  5. Make a decision based on the content/context and protect the data based on the need. This means having the flexibility to encrypt the entire message, specific sensitive data in the message, have an option to preserve the format of the sensitive data of the message and/or tokenize the data based on the need.
  6. Encrypt the message while preserving the format, so it won’t break the backend systems.
  7. Tokenize the PCI and/or PII data for compliance and security reasons.
  8. Scrutinize the message more deeply if the message is intended to go to a non-secure location/ endpoint – such as mobile devices, cloud location, third world country, etc.
  9. Comply with data residency mandates by restricting the processing and storage of data to a specific instance of the service based on where it is located.
  10. Have an elaborate access-control mechanism to the data based on user/ application clearance, data classification and the time and day of the access request.
  11. Most importantly, all of the above should be policy based which can be dynamically changed based on the need.
  12. Do all of the above seamlessly (or “automagically”).
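
Item 10 in the list above can be sketched as a small decision function: access is granted only when the caller’s clearance dominates the data’s classification and the request arrives inside an allowed time window. The labels, ranks, and business hours here are all hypothetical.

```python
from datetime import time

# Hypothetical clearance lattice: higher rank may read lower classifications.
CLEARANCE_RANK = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def allow_access(user_clearance: str, data_classification: str,
                 request_time: time) -> bool:
    """Grant access only when clearance dominates classification and the
    request falls inside business hours (08:00-18:00 in this sketch)."""
    ranked = CLEARANCE_RANK[user_clearance] >= CLEARANCE_RANK[data_classification]
    in_hours = time(8, 0) <= request_time <= time(18, 0)
    return ranked and in_hours

assert allow_access("secret", "confidential", time(10, 30))
assert not allow_access("internal", "secret", time(10, 30))   # insufficient clearance
assert not allow_access("secret", "internal", time(23, 0))    # outside allowed hours
```

In a real deployment these conditions would live in a dynamically changeable policy (item 11), not in code.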

In Part 2 of my blog, I will discuss how Intel Cloud data privacy solutions (or the Cloud encryption/tokenization gateway) elegantly solve this problem and should be the only toolkit you will ever need in your arsenal to solve this issue.

In the meanwhile, you can check out information about our tokenization and cloud data privacy solutions here.

Intel Cloud Data Privacy/ Tokenization Solutions

Intel Cloud/ API resource center

I also encourage you to download the Intel Expressway Tokenization Broker Data Sheet:

 


Gunnar Peterson on Understanding Cloud Security Standards, part 3

Moving applications to the Cloud puts many enterprises in an unaccustomed position: the technology and processes that their business depends on aren’t under their sole control, but rather a mix of responsibilities. The move to the Cloud is not a simple “forklift” migration where bits are copied to a Cloud Provider; instead, the architecture and assumptions must be reviewed and refreshed to meet the needs and constraints of Cloud systems.

Implementing authorization services with standards like XACML empowers the security architect to enforce policy via a Gateway and answer authorization queries from the source with the freshest and most specific data. Often the information needed to resolve authorization requests is stored beyond the directory and is only available in a database or other repository.

The Cloud presents real integration challenges to the enterprise; what Gartner calls Cloudstreams and Cloud Service Brokerages focus on “integration, governance, and security impact points.”

In Part 1, we examined four Anti-Patterns that enterprises should avoid as they move to the Cloud. These four Anti-Patterns are at the heart of dealing with the “Complexity Kills” problem that Gartner’s research shows as a recurring theme in Cloud migrations.

The four Anti-Patterns, with their descriptions and mitigations:

  • Low/No Access Control – “we’ll see if it works and then turn on security later”. Mitigation: strong access control protocols for authentication and authorization.
  • Replicating User Accounts – copying your Enterprise directory, in full or as an extract, to the Cloud Provider. Mitigation: retain enterprise provisioning on the Cloud Consumer side.
  • Copying Credentials – copying Enterprise access credentials to Cloud-based services. Mitigation: implement federated identity.
  • “Trusted” Proxy – the Gateway lacks support for security services and standards. Mitigation: implement improved access control, audit logging, and monitoring on the Gateway.

In Part 2, we looked at how open standards like SAML, OAuth, and OpenID can be used to mitigate the Anti-Patterns. When it comes to the fine-grained authorization and Attribute-Based Access Control that many Cloud applications require, standards like these are necessary but not sufficient for the overall identity architecture.

The old enterprise perimeter was based on network firewalls, but today applications are integrated, distributed via Cloud and consumed via Mobile apps. The network firewall is severely limited in this context. Fine grained authorization and Attribute Based Access Control help close out the gaps in Cloud Security by providing a Dynamic Perimeter that manages access control across these contexts.

Today’s reality is that users, systems and data are distributed. The genie is not going to be put back in the box, but access control policy enforcement can and should be centralized.

Centralizing access control policy enforcement is essential for:

  • Security architects, to understand the boundaries in the system
  • Developers, to know what and where to code for authorization operations
  • Auditors, to be able to review the authorization rules
  • Testers, to be able to identify vulnerabilities

Gateways are ideal for providing the Policy Enforcement Point function: intercepting requests before they reach the resource and ensuring the request is authorized.

The trend line in access control points to more fine-grained access control and to authorization decisions that are policy based (rather than hard coded).

 

 

The four Anti-Patterns that we discussed show why trends continue in the direction of increased granularity and policy based access control.

Low/no access control – “we’ll see if it works and then turn on security later”

Access control is too important to be left up to developer discretion. Authorization and access control should be configured in policy, not hard coded. Externalizing the application’s authorization gives the enterprise several important advantages, including flexibility to route authorization requests to the system that has the most specific and freshest information.

Replicating user accounts – copying your Enterprise directory, in full or as an extract, to the Cloud provider

XACML separates the Policy Enforcement Point (PEP: which protects the app) from the Policy Decision Point (PDP: which has the information to grant or deny the authorization request). This logical separation enables the enterprise to deploy its PEP on the Cloud Provider side to implement authorization enforcement while routing requests to PDP’s with the freshest and most specific attributes to answer the authorization request.

Separating the PEP and PDP means that the Gateway can intercept the request to the resource, route the request to the system with the freshest and most specific information, and enforce the policy. This pattern allows for a flexible, best-of-breed authorization architecture, with the PEP and PDP tuned to control the authorization workflow. The PEP is responsible for enforcing the chain of responsibilities in authorization, and the PDP carries out the responsibility by querying data sources to grant or deny access. Note that the information needed to make the grant-or-deny decision may cross from the Cloud Provider to the enterprise Cloud.
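
The PEP/PDP split described above can be sketched in a few lines. The attribute store, the single role rule, and the response strings are invented for illustration; a real deployment would exchange XACML request/response messages between the two components instead of direct method calls.

```python
class PDP:
    """Policy Decision Point: evaluates attribute-based rules against
    whichever attribute stores hold the freshest, most specific data."""
    def __init__(self, attributes: dict[str, dict]):
        self.attributes = attributes       # subject -> attribute dict

    def decide(self, subject: str, action: str, resource: str) -> str:
        attrs = self.attributes.get(subject, {})
        if (attrs.get("role") == "analyst"
                and action == "read"
                and resource.startswith("/reports/")):
            return "Permit"
        return "Deny"                      # default-deny

class PEP:
    """Policy Enforcement Point: intercepts the request on the Gateway,
    asks the PDP, and only forwards the request on Permit."""
    def __init__(self, pdp: PDP):
        self.pdp = pdp

    def handle(self, subject: str, action: str, resource: str) -> str:
        if self.pdp.decide(subject, action, resource) != "Permit":
            return "403 Forbidden"
        return f"200 OK: {action} {resource}"

pep = PEP(PDP({"alice": {"role": "analyst"}}))
assert pep.handle("alice", "read", "/reports/q3") == "200 OK: read /reports/q3"
assert pep.handle("bob", "read", "/reports/q3") == "403 Forbidden"
```

Because the PEP and PDP are separate objects, the PEP could sit on the Cloud Provider side while the PDP queries enterprise-side attribute sources.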

Copying credentials – enterprises sometimes copy credentials to Cloud-based services, thereby creating a new pool of identity risk to manage

Separating the PEP and PDP eliminates the need to hard code individual credentials to resolve access control challenges. This is because the PEP queries the PDP on behalf of the user to verify the user’s attributes against the authorization target, including the Resource and Action requested.

“Trusted” proxy – where trust is in name only

“Trust, but verify” means auditability. When authorization logic is strewn across millions of lines of code, auditing is impossible. Auditable systems must have authorization rules and logic that are clear and straightforward to review. Pulling key authorization policies out of the code and into XACML policies allows the Auditor to assess the target and ensure it meets the system owners’ goals.

Gunnar Peterson is a Managing Principal at Arctec Group. He is focused on distributed systems security for large mission-critical financial, financial exchange, healthcare, manufacturing, and federal/government systems, as well as emerging startups. Mr. Peterson is an internationally recognized software security expert, frequently published, an Associate Editor for IEEE Security & Privacy Journal on Building Security In, an Associate Editor for Information Security Bulletin, a contributor to the SEI and DHS Build Security In portal on software security, and an in-demand speaker at security conferences. He blogs at http://1raindrop.typepad.com.
