LangChain is one of the most popular AI projects, and for good reason! LangChain helps you “Build applications with LLMs through composability.” However, LangChain does not host LLMs or provide a standardized API for controlling them; that gap is what Prediction Guard addresses. Combining the two (Prediction Guard + LangChain) therefore gives you a framework for developing controlled and compliant applications powered by language models.
Installation and Setup
- Install the Python SDK with `pip install predictionguard`.
- Get a Prediction Guard access token (as described here) and set it as the environment variable `PREDICTIONGUARD_TOKEN`.
LLM Wrapper
There exists a Prediction Guard LLM wrapper, which you can access with:
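A minimal import sketch (the exact module path may differ across LangChain versions):

```python
from langchain.llms import PredictionGuard
```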
You can provide the name of the Prediction Guard model as an argument when initializing the LLM:
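For example (the model name below is illustrative; use a model listed in the Prediction Guard docs):

```python
from langchain.llms import PredictionGuard

# "MPT-7B-Instruct" is an example model name, not a guaranteed default
pgllm = PredictionGuard(model="MPT-7B-Instruct")
```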
You can also provide your access token directly as an argument:
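A sketch, assuming the wrapper accepts a `token` keyword argument as an alternative to the `PREDICTIONGUARD_TOKEN` environment variable:

```python
from langchain.llms import PredictionGuard

# Passing the access token directly instead of via the environment;
# the model name is illustrative
pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>")
```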
Finally, you can provide an “output” argument that is used to validate the output of the LLM:
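A sketch of structured output control, assuming the `output` argument takes a dict describing the expected output type (the categories here are illustrative):

```python
from langchain.llms import PredictionGuard

# Constrain the LLM's output to one of a fixed set of categories;
# the model name and category labels are example values
pgllm = PredictionGuard(
    model="MPT-7B-Instruct",
    output={
        "type": "categorical",
        "categories": ["product announcement", "apology", "relational"],
    },
)
```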
Example usage
Basic usage of the controlled or guarded LLM wrapper:
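A minimal sketch, assuming your access token is set in the environment and the model name is one available through Prediction Guard:

```python
import os

from langchain.llms import PredictionGuard

# Your Prediction Guard access token, set as an environment variable
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

# The model name is illustrative
pgllm = PredictionGuard(model="MPT-7B-Instruct")

# Call the wrapper like any other LangChain LLM
pgllm("Tell me a joke")
```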
Basic LLM Chaining with the Prediction Guard wrapper:
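A sketch of chaining with `PromptTemplate` and `LLMChain`, assuming the legacy `langchain` chain API and an illustrative model name:

```python
from langchain import LLMChain, PromptTemplate
from langchain.llms import PredictionGuard

# A simple chain-of-thought style prompt template
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# The model name is illustrative
pgllm = PredictionGuard(model="MPT-7B-Instruct")

# Compose the prompt and the Prediction Guard LLM into a chain
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.predict(question=question)
```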