Recently a colleague and friend of mine wrote a great article about different ways to become PCI 2.0 compliant by tokenizing PAN data. In case you missed it, I want to draw your attention to it.
Essentially, if you are looking to become PCI-DSS 2.0 compliant, there are a few ways to get there. The most painful is obviously a rip-and-replace strategy; the easiest is an incremental, less intrusive approach.
The first approach, the monolithic big-bang approach, is the legacy way of doing things. Once you identify the areas of your system that are non-compliant (that is, anything storing PAN data, encrypted or not, or processing PAN in the clear), you decide whether each component needs to be PCI compliant. The PCI audit is extensive, time consuming, and methodical; every process, application, storage system, and database is examined, which makes it very expensive. Once you know which components need to be compliant, you can rip and replace: touch every system component that needs modification and rewrite it to become compliant. This might mean touching every component and changing your entire architecture. It is the most expensive, most painful, and slowest path to compliance. While it can be effective for spot solutions, it becomes a problem if you have to repeat it every time the PCI-DSS requirements change (which seems to be every year).
The second approach, API/SDK-based tokenization, is much more effective. You retrofit applications, processes, systems, databases, etc. by making those components call an API (or SDK) that converts the PAN data and returns a token to replace the original PAN. This is a minimally invasive procedure. While it doesn't require changing your entire architecture, it still requires touching every component that needs to be compliant. This method is much faster to market, and it also lets you adapt quickly when PCI-DSS requirements change.
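To make the retrofit concrete, here is a minimal sketch of what an API-based tokenization call might look like. The `TokenVault` class, its method names, and the token format are hypothetical, purely for illustration; a real product exposes its own API and stores the PAN-to-token mapping in a hardened, audited vault service, not in application memory.

```python
import secrets

class TokenVault:
    """Illustrative in-memory token vault (hypothetical; real vaults
    are hardened, remote services with access controls and auditing)."""

    def __init__(self):
        self._pan_to_token = {}
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # Return the existing token if this PAN was already tokenized,
        # so the same card always maps to the same token.
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        # Random token that preserves the last four digits, so
        # downstream systems can still display them for receipts.
        token = secrets.token_hex(8) + pan[-4:]
        self._pan_to_token[pan] = token
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only systems inside the PCI scope should ever call this.
        return self._token_to_pan[token]

# A retrofitted application stores the token instead of the PAN:
vault = TokenVault()
token = vault.tokenize("4111111111111111")
```

The point of the retrofit is that only the vault itself remains in PCI scope; every component that stores or passes around the token no longer handles cardholder data.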
The third approach is the gateway approach. Here you monitor the traffic between components and tokenize/de-tokenize the data in transit; this is also known as in-line tokenization. It is the cheapest method and the quickest to market, but its biggest advantage is that the changes to your existing systems are minimal to nil. Essentially, you route the PAN data through a gateway, which converts it to tokens before it hits your systems. Imagine the painful exercise of making your mainframe and legacy systems compliant by re-coding them to handle tokenized data; this method essentially eliminates that.
You can read his entire article here.
Cost Effective PCI DSS Tokenization for Retail (Part I)
Cost Effective PCI DSS Tokenization for Retail (Part II)
Also, don’t forget to check out our tokenization buyer’s guide here.
Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel
Andy Thurai is Chief Architect and Group CTO of Application Security and Identity Products with Intel, where he is responsible for architecting SOA, Cloud, Mobile, Big Data, Governance, Security, and Identity solutions for their major corporate customers. In this role, he supports Intel/McAfee field sales, technical teams, and customer executives. Prior to this role, he held technology architecture leadership and executive positions with L-1 Identity Solutions, IBM (Datapower), BMC, CSC, and Nortel. His interests and expertise include Cloud, SOA, identity management, security, governance, and SaaS. He holds a degree in Electrical and Electronics engineering and has over 25 years of IT experience.