PII Anonymization
Some of your incoming prompts may include personally identifiable information (PII). With Prediction Guard’s PII anonymization feature, you can detect PII such as names, email addresses, phone numbers, credit card details, and country-specific ID numbers like SSNs, NHS numbers, and passport numbers.
Here’s a demonstration of how this works.
This outputs each detected PII entity along with the indices where the information was found.
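The detection output described above (entity plus indices) can be sketched locally. The regex patterns and entity labels below are purely illustrative, not Prediction Guard's actual detectors, which cover far more entity types and are much more robust:

```python
import re

# Illustrative patterns only -- real PII detection handles names, SSNs,
# passport numbers, and many other entity types far more robustly.
PATTERNS = {
    "EMAIL_ADDRESS": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE_NUMBER": r"\b\d{3}-\d{3}-\d{4}\b",
}

def detect_pii(prompt):
    """Return each detected entity with the indices where it was found."""
    findings = []
    for entity, pattern in PATTERNS.items():
        for match in re.finditer(pattern, prompt):
            findings.append({
                "entity": entity,
                "start": match.start(),
                "end": match.end(),
                "text": match.group(),
            })
    return findings

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567."
for finding in detect_pii(prompt):
    print(finding)
```

The key point is the shape of the result: every finding names the entity type and the exact character span, which is what makes targeted replacement possible.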
To maintain utility without compromising privacy, you have the option to replace PII with fake names and then forward the modified prompt to the LLM for further processing.
The processed prompt will then look like the following.
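The replace-with-fake-values transformation can be sketched as below. The fixed replacement strings are placeholders for illustration; the actual service generates realistic fake values:

```python
import re

# Placeholder replacements -- the real service generates realistic fake
# values; these fixed strings just illustrate the transformation.
FAKE_VALUES = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "fake.user@example.com",
    r"\b\d{3}-\d{3}-\d{4}\b": "000-000-0000",
}

def anonymize(prompt):
    """Swap each detected PII span for a fake stand-in value."""
    for pattern, fake in FAKE_VALUES.items():
        prompt = re.sub(pattern, fake, prompt)
    return prompt

print(anonymize("Email john.smith@acme.org before 4pm."))
```

Because the structure of the prompt is preserved, the LLM can still act on it normally; only the sensitive values have been swapped out.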
Other options for the `replace_method` parameter include: `random` (to replace the detected PII with random characters), `category` (to mask the PII with the entity type), and `mask` (to simply replace it with `*`).
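The behavior of these three options can be sketched locally. The entity label `US_SSN` and the exact replacement formats are illustrative assumptions, not necessarily what the service emits:

```python
import random
import string

def replace_pii(text, start, end, entity, method):
    """Replace text[start:end] according to the given replace_method."""
    pii = text[start:end]
    if method == "random":
        # Replace the detected PII with random characters of equal length.
        repl = "".join(random.choice(string.ascii_letters) for _ in pii)
    elif method == "category":
        # Mask the PII with its entity type.
        repl = f"<{entity}>"
    elif method == "mask":
        # Simply replace with asterisks.
        repl = "*" * len(pii)
    else:
        raise ValueError(f"unknown replace_method: {method}")
    return text[:start] + repl + text[end:]

prompt = "My SSN is 123-45-6789."
# "My SSN is " is 10 characters, so the SSN spans indices 10..21.
print(replace_pii(prompt, 10, 21, "US_SSN", "category"))
print(replace_pii(prompt, 10, 21, "US_SSN", "mask"))
```

`category` keeps the most context for the LLM (it still knows an SSN was present), while `mask` and `random` remove even the entity type from the prompt.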
Along with its own endpoint, Prediction Guard also allows you to include PII checks in the `completions` and `chat/completions` endpoints.
In the response, you can see the PII has been replaced, and the LLM response is based on the modified prompt.
You can also enable the PII check in the `completions` endpoint to block requests outright. With blocking enabled, any prompt containing PII is prevented from reaching the LLM, and the request fails with a `400 Bad Request` error code.
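A request with blocking enabled might be assembled as below. The field names in this payload (`input`, `pii`) and the model name are assumptions for illustration; consult the Prediction Guard API reference for the exact schema:

```python
import json

# Field names and model name are illustrative assumptions, not the
# documented Prediction Guard schema -- check the API reference.
payload = {
    "model": "example-model",  # hypothetical model name
    "prompt": "My email is jane.doe@example.com",
    "input": {
        "pii": "block",        # block the request instead of replacing PII
    },
}
print(json.dumps(payload, indent=2))

def handle_status(status_code):
    """With blocking enabled, PII-bearing prompts never reach the LLM;
    the API responds with 400 Bad Request instead."""
    if status_code == 400:
        return "request blocked: prompt contained PII"
    return "ok"
```

Client code should treat the `400` as an expected, recoverable outcome (for example, by prompting the user to remove sensitive details) rather than a transport failure.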
You can add the `pii` check to the `chat/completions` endpoint as well. This is illustrated below.
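A chat request with PII replacement enabled might be assembled as in this sketch. Again, the field names (`input`, `pii`, `pii_replace_method`) and the model name are assumptions for illustration, not the documented schema:

```python
import json

# Field names and model name are illustrative assumptions -- consult the
# Prediction Guard API reference for the real chat/completions schema.
payload = {
    "model": "example-chat-model",  # hypothetical model name
    "messages": [
        {
            "role": "user",
            "content": "Hi, I'm John Doe and my phone number is 555-123-4567.",
        },
    ],
    "input": {
        "pii": "replace",              # anonymize rather than block
        "pii_replace_method": "fake",  # assumed parameter name
    },
}
print(json.dumps(payload, indent=2))
```

The same `replace_method` options discussed earlier (`random`, `category`, `mask`) would apply here as well.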
This will produce an output like the following.
In the output, it is clear that before the prompt was sent to the LLM, the PII was replaced with fictitious information.