Tokenization for De-Identifying APIs

I was catching up on my RSS feeds over the weekend, reading all the things I missed while I was at IDF (the Intel Developer Forum), when I saw this great post from Kin Lane calling for “A Masking, Scrubbing, Anonymizing API”.  It reminded me of a conversation I had at IDF about Kaggle, a platform for crowdsourcing solutions to big data problems.  In both cases, the goal is to surface data in a way that protects personal information, and it got me thinking about how compliance intersects with API strategies.  With APIs serving as a universal tunnel into the enterprise, it’s important not to neglect security compliance in API content!  Fortunately, a tokenization proxy or API manager can be used to address these types of usage models.

Tokenization vs. Encryption vs. Redaction

Tokenization is the process of replacing a string with another, randomized string.  Expressway Tokenization Broker can perform this operation as a proxy for any API response, storing the personally identifiable information (PII) in a secure vault.  The only way to recover the original data is through a detokenization routine performed by a system with access to the secure vault.  This is somewhat similar to the mechanism Kin describes (replacing actual values with fake values), except that the tokens are not likely to be human-readable (e.g. instead of replacing Kin Lane with John Doe it might wind up reading zAe N8fc).  On the other hand, tokenization preserves correlation: because the same input always yields the same token, records can still be associated across data sets, whereas replacing every instance of any name with “John Doe” destroys that ability.  The retail industry has been using this mechanism for years, adopting the tokenization of Primary Account Numbers (PANs) as a best practice for PCI compliance.  We have recently seen this tokenization capability adopted for other types of PII, particularly where there are compliance and audit concerns.
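
To make the mechanics concrete, here is a minimal sketch of vault-backed tokenization in Python.  The in-memory dictionary stands in for the secure vault, and the token format and length are illustrative; a product like Tokenization Broker handles the secure storage, access control, and token formats for you.

import secrets
import string

class TokenVault:
    """Toy vault: maps tokens back to original values (stand-in for a secure store)."""
    def __init__(self):
        self._by_value = {}   # value -> token, so repeated values get the same token
        self._by_token = {}   # token -> value, for detokenization

    def tokenize(self, value: str) -> str:
        if value in self._by_value:          # deterministic: preserves correlation
            return self._by_value[value]
        alphabet = string.ascii_letters + string.digits
        token = "".join(secrets.choice(alphabet) for _ in range(8))
        self._by_value[value] = token
        self._by_token[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._by_token[token]         # only works with access to the vault

vault = TokenVault()
t1 = vault.tokenize("Kin Lane")
t2 = vault.tokenize("Kin Lane")
assert t1 == t2                              # same input, same token
print(t1, "->", vault.detokenize(t1))        # e.g. zAeN8fc3 -> Kin Lane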

[Figure: Tokenization of Primary Account Numbers for PCI compliance]

Format-Preserving Encryption (FPE) is another mechanism for de-identifying data, and it is available in all of our Expressway products.  In this case, the data is encrypted such that the ciphertext conforms to the same format as the input data.  For example, the SSN 123-45-1234 might encrypt to 789-12-3456.  This ensures that the ciphertext will pass any downstream format checking that may occur.  Unlike tokenization, however, FPE is reversible without access to a token vault: anyone holding the key can decrypt the ciphertext back to plaintext.  This makes the ciphertext behave more like conventionally encrypted data, enabling applications to use a shared secret to decrypt the data independently of the secure vault.
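
As a rough illustration of the idea only (not of any product’s implementation, and not a cryptographically sound construction), the sketch below “encrypts” an SSN by adding a key-derived offset modulo 10^9.  The result keeps the ddd-dd-dddd shape and is reversible with the key alone, no vault needed.  Real FPE uses vetted constructions such as NIST’s FF1.

import hashlib

MOD = 10**9  # SSNs have nine digits

def _offset(key: bytes) -> int:
    # Derive a fixed offset from the key; toy construction, NOT secure FPE
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % MOD

def fpe_encrypt(ssn: str, key: bytes) -> str:
    digits = int(ssn.replace("-", ""))
    out = (digits + _offset(key)) % MOD
    s = f"{out:09d}"
    return f"{s[:3]}-{s[3:5]}-{s[5:]}"   # same ddd-dd-dddd format as the input

def fpe_decrypt(ct: str, key: bytes) -> str:
    digits = int(ct.replace("-", ""))
    out = (digits - _offset(key)) % MOD
    s = f"{out:09d}"
    return f"{s[:3]}-{s[3:5]}-{s[5:]}"

ct = fpe_encrypt("123-45-1234", b"shared-secret")
print(ct)                                   # format-valid ciphertext
assert fpe_decrypt(ct, b"shared-secret") == "123-45-1234"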

Finally, data can be anonymized using redaction, which is also supported in all of our products.  This is the process of eliminating PII entirely rather than replacing it.  It is the most surefire mechanism for keeping PII out of the wrong hands, but it comes with a potential downside: it may prevent records from being associated with the same owner, particularly across data sets, and that correlation is often the most valuable opportunity in big data analysis.
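
A quick illustration of the trade-off, using two hypothetical data sets keyed by SSN:

import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

payroll = "123-45-1234 earned 60000"
benefits = "123-45-1234 elected plan B"

# Redaction: the PII is gone, but so is the join key across data sets
print(SSN.sub("[REDACTED]", payroll))    # [REDACTED] earned 60000
print(SSN.sub("[REDACTED]", benefits))   # [REDACTED] elected plan B

# Deterministic tokenization would have replaced both occurrences with the
# same opaque token, keeping the two records linkable without exposing the SSN.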

De-Identification Using the Façade API Proxy Pattern

We have seen customers take advantage of regular expressions to identify PII.  There are standard policies that can pick out Social Security Numbers, email addresses, and other common types of PII in any API.  Nonstandard types of PII can be detected as well, provided they conform to a well-defined structure (generally alphanumeric with a fixed length, although other patterns can be identified too).  Once the PII has been identified, the data can be de-identified using tokenization or encryption (including format-preserving encryption), or anonymized completely via redaction.  This policy can be generalized to proxy several APIs and replace any PII that passes through.  It works particularly well for credit card and Social Security numbers, both of which follow a very well-defined and relatively unique pattern.
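
Here is a minimal sketch of that façade pattern in Python.  The regexes, token format, and in-memory token map are illustrative stand-ins; a gateway product ships with tuned pattern libraries and a real vault behind them.

import re
import secrets

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

_tokens = {}  # value -> token; stands in for the secure vault

def _token_for(value: str) -> str:
    if value not in _tokens:
        _tokens[value] = secrets.token_hex(4)  # 8 hex chars, e.g. '9f2ac01b'
    return _tokens[value]

def deidentify(body: str) -> str:
    """Replace any PII matching the standard patterns in an API response body."""
    for pattern in PATTERNS.values():
        body = pattern.sub(lambda m: _token_for(m.group(0)), body)
    return body

response = '{"Email": "sally@acme.corp.us", "SSN": "123-45-1234"}'
print(deidentify(response))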

Anonymization policies can also be tailored to specific APIs that have well-defined schemas (along the lines of the Swagger example that Kin suggested), matching based on the JSON or XML field information.  For example, a colleague and I were playing with the idea of stashing employee information in DynamoDB.  An employee record might look like:

{ 
  "Name": { "S": "Sally Rockstar" }, 
  "Email": { "S": "sally@acme.corp.us" },
  "City": { "S": "Mountain View" },
  "State": { "S": "CA" },
  "Zip": { "N": "90210" },
  "DriversLic": { "S": "A1234567" },
  "SSN": { "S": "123-45-1234" },
  "CurrentSalary": { "N": "60000" } 
}

Within this data set, emails, SSNs, driver’s license numbers, and zip codes follow well-established rules that lend themselves to regular expressions.  However, the zip code rule (a 5-digit number) could also match the salary field.  You could enforce Zip+4 and decimal inputs (XXXXX-XXXX for Zip, XXXXX.XX for CurrentSalary), but for this data set it would probably be safer to match on the field name rather than the value.
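
A sketch of name-based matching against the DynamoDB-style record above.  The field list is the policy here, and redaction is just one choice of de-identification per field:

import json

# Match on field names rather than value patterns for this schema
PII_FIELDS = {"Name", "Email", "SSN", "DriversLic", "Zip"}

def deidentify_record(doc: str) -> str:
    record = json.loads(doc)
    for name, attr in record.items():
        if name in PII_FIELDS:
            for type_key in attr:   # DynamoDB attrs look like {"S": ...} or {"N": ...}
                attr[type_key] = "[REDACTED]"
    return json.dumps(record)

doc = '{"Name": {"S": "Sally Rockstar"}, "Zip": {"N": "90210"}, "CurrentSalary": {"N": "60000"}}'
print(deidentify_record(doc))
# Zip is scrubbed while CurrentSalary is left alone; a value-based
# 5-digit rule would have hit both.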

Another benefit of the anonymizing façade API pattern is that it can support conditional de-identification.  For example, I may want to allow PII to be read within my network but have it de-identified for external clients.  Or I may want to tokenize internally but redact externally.  We can define a workflow that uses any number of factors to make the decision at API request time, allowing access to live data rather than a snapshot.
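
A sketch of that decision point, assuming the gateway can see the client’s source address (the CIDR range and policy names are illustrative):

import ipaddress

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")  # assumed corporate range

def choose_policy(client_ip: str) -> str:
    """Pick a de-identification policy at request time, per client."""
    if ipaddress.ip_address(client_ip) in INTERNAL_NET:
        return "tokenize"   # internal: reversible via the vault
    return "redact"         # external: PII removed outright

print(choose_policy("10.1.2.3"))      # tokenize
print(choose_policy("203.0.113.9"))   # redact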

Summary

I’m excited about the potential for APIs to allow faster problem solving through crowdsourcing, and Kaggle looks like a very interesting platform for enabling this.  I’m also happy to see folks like Kin working to make government more open and accessible through the use of APIs.  API gateways can play a role in those transformations by sanitizing the data, reducing the risk of PII being compromised.  As Mark Silverberg pointed out in the comments on Kin’s blog, the safest way to protect PII is to scrub the data set before it goes out.  With a tokenizing or encrypting proxy façade, that scrubbing happens inside the enterprise boundary, minimizing the risk of an escape.

As I noted above, our products are unique in the API management space in that they support high-performance de-identification policies.  They also include powerful regular expression libraries that can be used to identify (and then de-identify) PII contained in an API response.  I did a webinar with John Kindervag recently that touched on many of these topics as well.  You can watch the replay to learn more, or try out FPE and redaction for yourself using Expressway API Manager on Amazon Web Services.

About Travis Broughton

Travis is an architect with Intel's Data Center Software Division. He has fifteen years of experience with Intel IT, working as an Enterprise Architect.
