You can get toxicity scores from the /toxicity REST API endpoint, from any of the official SDKs (Python, Go, Rust, JS), or with cURL.

Generate a Toxicity Score

To generate a toxicity score, use one of the following code examples, selecting the method that best fits your application.

import os
import json
from predictionguard import PredictionGuard

# Set your Prediction Guard token as an environment variable.
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

client = PredictionGuard()

# Perform the toxicity check.
result = client.toxicity.check(
    text="This is a perfectly fine statement."
)

print(json.dumps(
    result,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

The output will look something like:

{
    "checks": [
        {
            "index": 0,
            "score": 0.0003801816201303154
        }
    ],
    "id": "toxi-e97bcee4-de62-4214-bf9b-dafa9922931c",
    "object": "toxicity.check",
    "created": 1727795168
}
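Since the response contains one score per input in the `checks` array, you can gate content on it. A minimal sketch of such a filter, using the response shape shown above (the `is_toxic` helper and the 0.5 threshold are illustrative choices, not part of the SDK):

```python
# Hypothetical helper: flag a response whose toxicity score exceeds a threshold.
def is_toxic(result: dict, threshold: float = 0.5) -> bool:
    """Return True if any check in the API result scores above the threshold."""
    return any(check["score"] > threshold for check in result.get("checks", []))

# Example using the response shape shown above.
sample = {
    "checks": [{"index": 0, "score": 0.0003801816201303154}],
    "id": "toxi-e97bcee4-de62-4214-bf9b-dafa9922931c",
    "object": "toxicity.check",
    "created": 1727795168,
}

print(is_toxic(sample))                     # a score this low is not flagged
print(is_toxic(sample, threshold=0.0001))   # a stricter threshold flags it
```

The right threshold depends on your application; tune it against your own content before relying on it in production.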

This approach gives readers a straightforward way to choose and apply the code example that best suits their needs for generating toxicity scores, whether in Python, Go, Rust, JS, or cURL.