You can get toxicity scores from the /toxicity endpoint or the Toxicity class in the Python client. The endpoint and class take a single text parameter, which should contain the candidate text to be scored for toxicity. The output includes a score that ranges from 0.0 to 1.0; the higher the score, the more toxic the text.
Generate a toxicity score
To generate a toxicity score, you can use the following code examples. Depending on your preference or requirements, select the appropriate method for your application.
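Below is a minimal sketch that calls the /toxicity endpoint over HTTP with the requests library. The base URL, authentication header, and environment variable name are placeholder assumptions for illustration, not documented values; the Toxicity class in the Python client accepts the same text parameter.

import os

import requests

# Placeholder base URL and API key -- substitute the real values for your account.
API_URL = "https://api.example.com/toxicity"
API_KEY = os.environ["TOXICITY_API_KEY"]

# The endpoint takes a single "text" parameter containing the candidate text to score.
response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "Thanks so much for the quick and helpful reply!"},
    timeout=30,
)
response.raise_for_status()

# The response includes a toxicity score between 0.0 and 1.0.
print(response.json())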
The output will look something like:
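An illustrative response shape, assuming the score is returned in a simple JSON body (the exact field names may differ):

{
    "score": 0.0007
}

A score this close to 0.0 indicates the text is very unlikely to be toxic, while values approaching 1.0 indicate increasingly toxic content.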