Reference

Toxicity

You can get toxicity scores from the /toxicity REST API endpoint or any of the official SDKs (Python, Go, Rust, JS), or by calling the endpoint directly with cURL.

Generate a Toxicity Score

To generate a toxicity score, you can use the following code examples. Select the method that best fits your application.

import os
import json

from predictionguard import PredictionGuard

# Set your Prediction Guard API key as an environment variable.
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

client = PredictionGuard()

# Perform the toxicity check.
result = client.toxicity.check(
    text="This is a perfectly fine statement."
)

print(json.dumps(
    result,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

The output will look something like:

{
    "checks": [
        {
            "score": 0.00038018127088434994,
            "index": 0,
            "status": "success"
        }
    ],
    "created": 1717781084,
    "id": "toxi-CcaeeA5ESVssJM9M3KmZNZh0e4Gnq",
    "object": "toxicity_check"
}
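Once you have a response in this shape, a common follow-up is to gate content on the score. The sketch below assumes the response structure shown above; the 0.5 cutoff is an illustrative choice for this example, not an API default.

```python
TOXICITY_THRESHOLD = 0.5  # Illustrative cutoff; tune for your application.

def is_toxic(result: dict, threshold: float = TOXICITY_THRESHOLD) -> bool:
    """Return True if any successful check scores above the threshold."""
    return any(
        check["score"] > threshold
        for check in result["checks"]
        if check["status"] == "success"
    )

# Applying it to the example response above:
sample = {
    "checks": [
        {"score": 0.00038018127088434994, "index": 0, "status": "success"}
    ],
    "created": 1717781084,
    "id": "toxi-CcaeeA5ESVssJM9M3KmZNZh0e4Gnq",
    "object": "toxicity_check",
}
print(is_toxic(sample))  # A score this low falls well under the cutoff: False
```

Skipping checks whose status is not "success" keeps a failed check from being silently treated as a score of zero.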

This approach gives you a straightforward way to choose and apply the code example that best suits your needs for generating toxicity scores, whether in Python, Go, Rust, JS, or cURL.