Navigating the LLM landscape can be tricky, especially with hallucinations or inaccurate answers. Whether you’re integrating LLMs into customer-facing products or using them for internal data processing, ensuring the accuracy of the information provided is essential. Prediction Guard uses state-of-the-art (SOTA) models for factuality checking to evaluate the outputs of LLMs against the context of the prompts.

You can either add factuality=True or use the /factuality endpoint to access this functionality directly. In the example below, we answer a question about a provided context and then check the factuality of the generated answer. First, we define a prompt template.

import os
import json

from predictionguard import PredictionGuard
from langchain.prompts import PromptTemplate

# Set your Prediction Guard token as an environmental variable.
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

client = PredictionGuard()

template = """### Instruction:
Read the context below and respond with an answer to the question.

### Input:
Context: {context}

Question: {question}

### Response:
"""

prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=template,
)

context = "California is a state in the Western United States. With over 38.9 million residents across a total area of approximately 163,696 square miles (423,970 km2), it is the most populous U.S. state, the third-largest U.S. state by area, and the most populated subnational entity in North America. California borders Oregon to the north, Nevada and Arizona to the east, and the Mexican state of Baja California to the south; it has a coastline along the Pacific Ocean to the west. "

result = client.completions.create(
    model="Hermes-2-Pro-Llama-3-8B",
    prompt=prompt.format(
        context=context,
        question="What is California?"
    )
)

We can then check the factuality score of the answer generated by the LLM.

fact_score = client.factuality.check(
    reference=context,
    text=result['choices'][0]['text']
)

print("COMPLETION:", result['choices'][0]['text'])
print("FACT SCORE:", fact_score['checks'][0]['score'])

This outputs something similar to the following:

COMPLETION: California is a state located in the western region of the United States. It is the most populous state in the country, with over 38.9 million residents, and the third-largest state by area, covering approximately 163,696 square miles (423,970 km2). California shares its borders with Oregon to the north, Nevada and Arizona to the east, and the Mexican state of Baja California to the south. It also
FACT SCORE: 0.8541514873504639
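
A high score like this indicates that the completion is consistent with the provided context. Below is a minimal sketch of how you might act on the score in your own application; the threshold value is an illustrative assumption, not a recommendation.

# Hypothetical gating logic: only surface answers whose factuality score
# clears a threshold you choose for your use case.
THRESHOLD = 0.7  # illustrative cutoff; tune for your application

score = fact_score['checks'][0]['score']
if score >= THRESHOLD:
    print("Answer accepted:", result['choices'][0]['text'])
else:
    print("Answer rejected: factuality score", score, "is below the threshold")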

Now, let's try to make the model hallucinate. The hallucination is caught by the factuality check, which returns a much lower score.

result = client.completions.create(
    model="Hermes-2-Pro-Llama-3-8B",
    prompt=prompt.format(
        context=context,
        question="Make up something completely fictitious about California. Contradict a fact in the given context."
    )
)

fact_score = client.factuality.check(
    reference=context,
    text=result['choices'][0]['text']
)

print("COMPLETION:", result['choices'][0]['text'])
print("FACT SCORE:", fact_score['checks'][0]['score'])

This outputs something similar to the following:

COMPLETION: California is the smallest state in the United States.
FACT SCORE: 0.12891793251037598

Standalone Factuality Functionality

You can also call the factuality-checking functionality directly using the /factuality endpoint, which lets you configure thresholds and score arbitrary inputs.
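
For example, you can score any reference/text pair with the Python client and apply your own threshold. This is a minimal sketch; the reference, text, and threshold values below are illustrative assumptions.

import os

from predictionguard import PredictionGuard

# Set your Prediction Guard token as an environmental variable.
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

client = PredictionGuard()

# Score an arbitrary claim against an arbitrary reference passage.
response = client.factuality.check(
    reference="The sky is blue.",
    text="The sky is green."
)

score = response['checks'][0]['score']
print("FACT SCORE:", score)

# Apply your own threshold (illustrative value).
if score < 0.5:
    print("The text appears inconsistent with the reference.")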