Reference

Toxicity

You can get toxicity scores from the /toxicity endpoint or the Toxicity class in the Python client. Both take a single text parameter containing the candidate text to be scored for toxicity. The output includes a score ranging from 0.0 to 1.0; the higher the score, the more toxic the text.

Generate a toxicity score

To generate a toxicity score, you can use the following code examples; select the method that best fits your application.

import os
import json

import predictionguard as pg

# Set your Prediction Guard token as an environment variable.
os.environ["PREDICTIONGUARD_TOKEN"] = "<your access token>"

# Perform the toxicity check.
result = pg.Toxicity.check(
    text="This is a perfectly fine statement"
)

# Pretty-print the JSON response.
print(json.dumps(
    result,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

The output will look something like:

{
    "checks": [
        {
            "index": 0,
            "score": 0.00036882987478747964,
            "status": "success"
        }
    ],
    "created": 1701721860,
    "id": "toxi-REU4ZqFADGAiU6xJmN9PgtlgBO6x9",
    "object": "toxicity_check"
}
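
Because the score is a continuous value between 0.0 and 1.0, applications usually compare it against a threshold rather than using it directly. Below is a minimal sketch of that pattern, building on the result returned above; the 0.5 cutoff is an arbitrary illustration, not a recommended value.

# Pull the toxicity score out of the first (and only) check in the response.
score = result["checks"][0]["score"]

# Flag the text if the score crosses an application-specific threshold.
# The 0.5 cutoff below is only an illustration; tune it for your use case.
TOXICITY_THRESHOLD = 0.5

if score >= TOXICITY_THRESHOLD:
    print(f"Text flagged as toxic (score={score:.4f})")
else:
    print(f"Text passed the toxicity check (score={score:.4f})")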
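
If you prefer to call the /toxicity endpoint directly instead of going through the Python client, the request body carries the same single text field. The sketch below uses the requests library; the base URL and bearer-token header are assumptions made for illustration, so check the API reference for the exact values your deployment expects.

import os

import requests

# Assumed base URL for the toxicity endpoint; confirm the correct host
# for your deployment.
url = "https://api.predictionguard.com/toxicity"

# Assumed bearer-token auth header, built from the same access token
# used by the Python client.
headers = {
    "Authorization": "Bearer " + os.environ["PREDICTIONGUARD_TOKEN"],
    "Content-Type": "application/json",
}

# The endpoint takes a single "text" field with the candidate text.
payload = {"text": "This is a perfectly fine statement"}

response = requests.post(url, headers=headers, json=payload)
print(response.json())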