Reference

Factuality

You can get factuality scores (more precisely, factual consistency scores) from the /factuality endpoint or the Factuality class in the Python client. This endpoint/class takes two parameters:

  • reference - A reference text with which you want to compare another text for factual consistency.
  • text - The candidate text that will be scored for factual consistency.

The output will include a score that ranges from 0.0 to 1.0. The higher the score, the more factually consistent the text is with the reference.
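Because the score is a continuous value between 0.0 and 1.0, applications typically compare it against a threshold when deciding whether to accept a piece of text. A minimal sketch of that pattern follows; the 0.5 threshold and the helper name are illustrative choices, not values recommended by Prediction Guard:

```python
def is_factually_consistent(score: float, threshold: float = 0.5) -> bool:
    """Classify a factual consistency score against a chosen threshold.

    The 0.5 default is an arbitrary example; tune it for your use case.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    return score >= threshold

# A low score indicates the text is not consistent with the reference.
print(is_factually_consistent(0.12404686957597733))  # False
print(is_factually_consistent(0.95))                 # True
```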

Generate a factuality score

To generate a factuality score, you can use the following code examples. Depending on your preference or requirements, select the appropriate method for your application.

import os
import json

import predictionguard as pg

# Set your Prediction Guard token as an environment variable.
os.environ["PREDICTIONGUARD_TOKEN"] = "<your access token>"

# Perform the factual consistency check.
result = pg.Factuality.check(
    reference="The sky is blue",
    text="The sky is green"
)

print(json.dumps(
    result,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

The output will look something like:

{
    "checks": [
        {
            "score": 0.12404686957597733,
            "index": 0,
            "status": "success"
        }
    ],
    "created": 1701721456,
    "id": "fact-O0CdxbefFwSRo7uypla7hdUka3pPf",
    "object": "factuality_check"
}
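The score itself lives inside the checks array of the response. A short sketch of pulling it out of a response shaped like the one above (the response dict here is hard-coded sample data, not the result of a live API call):

```python
# Sample response mirroring the structure shown above (hard-coded,
# not fetched from the API).
response = {
    "checks": [
        {"score": 0.12404686957597733, "index": 0, "status": "success"}
    ],
    "created": 1701721456,
    "id": "fact-O0CdxbefFwSRo7uypla7hdUka3pPf",
    "object": "factuality_check",
}

# Each entry in "checks" carries its own index, status, and score.
for check in response["checks"]:
    if check["status"] == "success":
        print(f"check {check['index']}: score={check['score']:.4f}")
```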