Factuality
Navigating the LLM landscape can be tricky, especially with hallucinations or inaccurate answers. Whether you're integrating LLMs into customer-facing products or using them for internal data processing, ensuring the accuracy of the information they provide is essential. Prediction Guard uses state-of-the-art (SOTA) models for factuality checking to evaluate the outputs of LLMs against the context of the prompts.
You can either add `factuality=True` to a request or use the `/factuality` endpoint to access this functionality directly. Let's use the following prompt template to determine some features of an Instagram post announcing new products. First, we can define a prompt template.
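Here's a minimal sketch of such a template, assuming the Prediction Guard Python client (`predictionguard` package) with an API key set in the `PREDICTIONGUARD_API_KEY` environment variable. The post content and template wording below are purely illustrative.

```python
from predictionguard import PredictionGuard

# Assumes PREDICTIONGUARD_API_KEY is set in your environment.
client = PredictionGuard()

# Illustrative prompt template; the post content is hypothetical.
template = """### Instruction:
Read the Instagram post below and answer the question using only the post.

### Post:
{post}

### Question:
{question}

### Response:
"""

post = (
    "Big news! Our spring collection drops this Friday: a recycled-canvas "
    "tote, a stainless-steel water bottle, and a matching enamel pin set."
)

prompt = template.format(
    post=post,
    question="Which products are announced in this post?",
)
```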
We can then check the factuality score of the answer generated by the LLM.
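A sketch of that call is below. It assumes the completion request accepts an `output={"factuality": True}` option (per the `factuality=True` flag mentioned above); the model name is only an example, so swap in any model available on your account.

```python
# Generate an answer and ask Prediction Guard to score its factuality
# against the context in the prompt.
result = client.completions.create(
    model="Hermes-2-Pro-Llama-3-8B",  # example model name
    prompt=prompt,
    output={"factuality": True},
)

print(result["choices"][0])
```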
This outputs something similar to the following.
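The exact fields depend on the API version; a hypothetical response shape looks roughly like this.

```
{
    "index": 0,
    "status": "success",
    "text": "The post announces a recycled-canvas tote, a stainless-steel water bottle, and a matching enamel pin set."
}
```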
Now, we can try to make the model hallucinate. In this case, the hallucination is caught and Prediction Guard returns an error status.
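For example, we can ask about something the post never mentions; the question below is deliberately unanswerable from the context, and the rest of the call mirrors the sketch above.

```python
# A question the post cannot answer, to coax the model into inventing details.
bad_prompt = template.format(
    post=post,
    question="What discount code is offered in this post?",
)

result = client.completions.create(
    model="Hermes-2-Pro-Llama-3-8B",  # example model name
    prompt=bad_prompt,
    output={"factuality": True},
)

print(result["choices"][0])
```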
This outputs something similar to the following.
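The exact status message varies by API version; a hypothetical failed check looks roughly like this.

```
{
    "index": 0,
    "status": "error: failed a factuality check",
    "text": ""
}
```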
Standalone Factuality Functionality
You can also call the factuality checking functionality directly using the `/factuality` endpoint, which enables you to configure thresholds and score arbitrary inputs.
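As a sketch, assuming the Python client exposes a `factuality.check` helper that wraps the `/factuality` endpoint with `reference` and `text` fields, scoring an arbitrary claim against an arbitrary reference text might look like this (the strings are hypothetical).

```python
# Score an arbitrary claim (text) against a trusted reference.
fact_score = client.factuality.check(
    reference="The tote bag in the spring collection is made of recycled canvas.",
    text="The tote bag is made of leather.",
)

print(fact_score)
```

The returned score can then be compared against whatever threshold fits your use case.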