Reference

Factuality

You can get factuality scores (more precisely, factual consistency scores) from the /factuality REST API endpoint, any of the official SDKs (Python, Go, Rust, or JS), or directly with cURL.

The output will include a score that ranges from 0.0 to 1.0. The higher the score, the more factually consistent the text is with the reference.

Generate a Factuality Score

To generate a factuality score, you can use the following code example (shown here in Python; equivalent calls are available in the other SDKs and via cURL). Select the method that best fits your application.

import os
import json

from predictionguard import PredictionGuard

# Set your Prediction Guard API key as an environment variable.
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

client = PredictionGuard()

# Perform the factual consistency check.
result = client.factuality.check(
    reference="The sky is blue.",
    text="The sky is green."
)

print(json.dumps(
    result,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

The output will look something like:

{
    "checks":[
        {
            "score":0.26569077372550964,
            "index":0,
            "status":"success"
        }
    ],
    "created":1717780745,
    "id":"fact-04pim5X8ZXDbwTJzCA8aeKDxRDh6H",
    "object":"factuality_check"
}
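In practice, you will usually want to pull the numeric score out of the response rather than print the whole object. The snippet below is a minimal sketch of that, assuming the response has the structure shown above (a `checks` list whose entries carry a `score`); the 0.5 threshold is an arbitrary illustration, not an official recommendation.

```python
# Sample response matching the structure shown above.
result = {
    "checks": [
        {"score": 0.26569077372550964, "index": 0, "status": "success"}
    ],
    "created": 1717780745,
    "id": "fact-04pim5X8ZXDbwTJzCA8aeKDxRDh6H",
    "object": "factuality_check",
}

# Extract the factual consistency score for the first (and here only) check.
score = result["checks"][0]["score"]

# Apply an illustrative threshold; tune this to your own tolerance
# for factual drift (0.5 here is a hypothetical cutoff).
is_consistent = score >= 0.5

print(f"score={score:.3f}, consistent={is_consistent}")
```

With the sample output above, the score is about 0.266, so the check would be flagged as inconsistent under this threshold.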

This approach gives readers a straightforward way to choose and apply the code example that best suits their needs for generating factuality scores, whether in Python, Go, Rust, JS, or via cURL.