Tokenization: Ready for Prime Time

The digital transformation has changed how the world does business. It has created whole new enterprises and industries, but it has also left many organizations vulnerable to new and destructive threats. Digital transformation can and does deliver increased efficiencies, improved decision-making, lower costs, improved reach, and higher profits. But it also frequently relies on increasing amounts of personal and other sensitive data. As the volume of sensitive data grows, so, too, does the threat of theft. For cybercriminals today, personal information residing on the web is like cash.

According to CIO:

Personal information is the currency of the underground economy. It’s literally what cybercriminals trade in. Hackers who obtain this data can sell it to a variety of buyers, including identity thieves, organized crime rings, spammers and botnet operators, who use the data to make even more money.

Individuals and businesses need to combat these threats to secure their own assets and reputations as well as to comply with increasingly numerous and rigorous regulations, such as the EU’s now active General Data Protection Regulation (GDPR).

There are valuable tools to help fight digital security threats, but which tools to use and how to deploy them depend on the type of data and the use case, or application. For example, using a customer’s data to purchase goods from a merchant is different from using a customer’s data to identify a customer in a loyalty program or to provide health care services.


Among the tools that protect the data itself by making it unusable should it be stolen is tokenization. Tokenization protects sensitive data by substituting non-sensitive data for it. It creates an unrecognizable tokenized form of the data that maintains the format of the source data. For example, a nine-digit national identity number (123-45-6789), when tokenized (e.g., 275-47-5296), looks similar to the original number and can be used in many operations that call for data in that format, without the risk of linking it to the number-holder's personal information. The tokenized data can also be stored in the same size and format as the original data, so storing it requires no changes to database schemas or processes.
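To make the idea concrete, here is a minimal sketch of vault-based, format-preserving tokenization. The `TokenVault` class and its methods are hypothetical illustrations, not a real product API; a production system would use a hardened, access-controlled token vault or a vaultless scheme such as format-preserving encryption.

```python
import secrets

class TokenVault:
    """Illustrative in-memory token vault (hypothetical, for demonstration).

    Maps original values to tokens that keep the source format:
    digits are replaced with random digits, while separators such
    as dashes are preserved.
    """

    def __init__(self):
        self._forward = {}   # original value -> token
        self._reverse = {}   # token -> original value

    def tokenize(self, value: str) -> str:
        if value in self._forward:
            return self._forward[value]   # same input always yields the same token
        while True:
            # Replace each digit with a cryptographically random digit;
            # keep non-digit characters so the format is preserved.
            token = "".join(
                secrets.choice("0123456789") if ch.isdigit() else ch
                for ch in value
            )
            if token != value and token not in self._reverse:
                break
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only a party with access to the vault can recover the original.
        return self._reverse[token]

vault = TokenVault()
ssn = "123-45-6789"
tok = vault.tokenize(ssn)
print(tok)   # e.g. "407-12-8853": random each run, but same 3-2-4 digit format
```

Because the token has the same length and shape as the original, it can flow through existing databases and applications unchanged, while only the vault can map it back.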


Historically, the protection of credit and debit card numbers, both for payment as well as non-payment processes, has been the preeminent use case for tokenization, but the largest opportunity going forward is the general protection of sensitive data.

Tokens are increasingly being used to secure other types of sensitive personally identifiable information (PII), including Social Security numbers, telephone numbers, email addresses, and account numbers, as well as protected health information (PHI). Many backend systems rely on Social Security numbers, passport numbers, and driver’s license numbers as unique identifiers. Because these identifiers are tied into the systems, they are very difficult to remove, and they are also used to access information for billing, order status, and customer service. Consequently, tokenization is now being used to protect PII from exposure to hackers while maintaining the functionality of backend systems.

Example Use Cases


Healthcare

Let’s take healthcare as an example. Throughout the process of providing medical services to a given patient, multiple people will come into contact with the patient’s records: physicians, nurses, aides, insurance providers, and so on. This individual’s PHI needs to be protected at all times and revealed only to those authorized to have full access.

Financial Services

In financial services, anti-money-laundering initiatives require analyzing data while maintaining privacy and security for PII, as well as the ability to securely return to the original data for post-analysis follow-up. These records not only need to be protected; they also need to be shared across national boundaries, which is regulated by data residency laws.

Human Resources

HR applications, such as accounting and benefits systems, generally require PII such as national identification numbers, telephone numbers, and street and e-mail addresses. Similarly, automated batch jobs, such as payroll deposits and retirement plan contributions, are performed by HR applications using this kind of information. In all these cases, the data needs to be protected.

Unfortunately, providing this internal data security and complying with all the applicable laws and mandates is costly, and no organization has unlimited resources. So finding the right combination of effective protection that provides operational flexibility at a reasonable cost is essential to an organization’s ability to successfully protect its data from breaches.

Encryption and Tokenization

Encryption has been the preferred way to protect sensitive data, and it remains valid for the majority of use cases. However, in many cases, even beyond the payments world, the substitution technique of tokenization is a cost-effective way to protect and safeguard sensitive information. And while encryption usually changes the format of the data, which can itself be an advantage in some scenarios, tokenization, as noted above, does not. So tokenized data can be used in place of the original data without disturbing the database schema, which is a great advantage if your organization wants to keep the applications and systems already in place.
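The schema impact of that format change can be illustrated with a toy example. The XOR transform below is NOT real encryption; it is only a stand-in to show that once a value is encrypted and encoded for storage, it typically no longer fits the original nine-digit, dash-separated layout, whereas a token (as in the earlier example) would.

```python
import binascii

ssn = "123-45-6789"

# Toy XOR "cipher": a stand-in for real encryption, used here only to
# demonstrate the format change. Real ciphers (e.g. AES) similarly produce
# binary output that must be encoded (hex, base64) before storage.
key = 0x5A
ciphertext = bytes(b ^ key for b in ssn.encode("ascii"))
stored = binascii.hexlify(ciphertext).decode("ascii")

print(stored)       # 22 hex characters, no dashes
print(len(stored))  # twice the original 11 characters: breaks a fixed-width column
```

A column sized and validated for an 11-character, dash-separated value would reject this, which is why adopting encryption often forces schema and application changes that tokenization avoids.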

To learn more about Thales eSecurity’s Vormetric Tokenization with Dynamic Data Masking, check the link.

The post Tokenization: Ready for Prime Time appeared first on Data Security Blog | Thales e-Security.

*** This is a Security Bloggers Network syndicated blog from Data Security Blog | Thales eSecurity authored by Mark Royle. Read the original post at:
