Guides

Using Prediction Guard proxies in LangChain

LangChain is one of the most popular AI projects, and for good reason! LangChain helps you “Build applications with LLMs through composability.” However, LangChain doesn’t host LLMs or provide a standardized API for controlling their output; that’s where Prediction Guard comes in. Combining the two (Prediction Guard + LangChain) gives you a framework for developing controlled and compliant applications powered by language models.

Installation and Setup

  • Install the Python SDK with pip install predictionguard
  • Get a Prediction Guard access token (as described here) and set it as the environment variable PREDICTIONGUARD_TOKEN, as sketched below.
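
For example, a minimal sketch of setting the token from within Python (assuming you have already created an access token in your Prediction Guard account):

import os

# Placeholder value; substitute the access token from your Prediction Guard
# account. The LangChain wrapper below reads this environment variable.
os.environ["PREDICTIONGUARD_TOKEN"] = "<your access token>"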

LLM Wrapper

There is a Prediction Guard LLM wrapper, which you can access with:

from langchain.llms import PredictionGuard

You can provide the name of the Prediction Guard model as an argument when initializing the LLM:

pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B")

You can also provide your access token directly as an argument:

pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B", token="<your access token>")

Finally, you can provide an “output” argument that is used to validate the output of the LLM:

pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B", output={"toxicity": True})
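
Multiple output checks can be combined in the same dictionary. As an illustrative sketch, assuming the API also accepts a factuality flag alongside toxicity (check the Prediction Guard docs for the exact set of supported keys):

pgllm = PredictionGuard(
    model="Nous-Hermes-Llama2-13B",
    # Assumption: "factuality" is accepted the same way as "toxicity";
    # consult the Prediction Guard docs for the supported output checks.
    output={"toxicity": True, "factuality": True},
)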

Example usage

Basic usage of the controlled or guarded LLM wrapper:

import os

from langchain.llms import PredictionGuard
from langchain import PromptTemplate

# Your Prediction Guard access token. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your access token>"

# Define a prompt template
template = """Respond to the following query based on the context.

Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉

Query: {query}

Result: """
prompt = PromptTemplate(template=template, input_variables=["query"])

# With "guarding" or controlling the output of the LLM. See the
# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
# control the output with integer, float, boolean, JSON, and other types and
# structures.
pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B", output={"toxicity": True})
pgllm(prompt.format(query="What kind of post is this?"))

Basic LLM Chaining with the Prediction Guard wrapper:

import os

from langchain import PromptTemplate, LLMChain
from langchain.llms import PredictionGuard

# Your Prediction Guard access token. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.predict(question=question)