Guides

Using Prediction Guard proxies in LangChain

LangChain is one of the most popular AI projects, and for good reason! LangChain helps you “Build applications with LLMs through composability.” However, LangChain doesn’t host LLMs or provide a standardized API for controlling them; that is where Prediction Guard comes in. Combining the two (Prediction Guard + LangChain) gives you a framework for developing controlled and compliant applications powered by language models.

Installation and Setup

  • Install the Python SDK with pip install predictionguard
  • Get a Prediction Guard API key (as described here) and set it as the environment variable PREDICTIONGUARD_API_KEY (a snippet showing this follows below).
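
For example, the key can be set from Python before initializing the wrapper; replace the <api key> placeholder with your own key:

import os

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"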

LLM Wrapper

There exists a Prediction Guard LLM wrapper, which you can access with:

from langchain.llms import PredictionGuard

You can provide the name of the Prediction Guard model as an argument when initializing the LLM:

pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B")

You can also provide your API key directly as an argument:

pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B", token="<api key>")

Finally, you can provide an “output” argument that is used to validate the output of the LLM:

pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B", output={"toxicity": True})

Example usage

Basic usage of the controlled or guarded LLM wrapper:

import os

from langchain.llms import PredictionGuard
from langchain import PromptTemplate, LLMChain

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

# Define a prompt template
template = """Respond to the following query based on the context.

Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉

Query: {query}

Result: """
prompt = PromptTemplate(template=template, input_variables=["query"])

# With "guarding" or controlling the output of the LLM. See the
# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
# control the output with integer, float, boolean, JSON, and other types and
# structures.
pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B", output={"toxicity": True})
pgllm(prompt.format(query="What kind of post is this?"))
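
The output dictionary can also be used to enforce structure on the completion, not just run checks like toxicity. The exact schema is described in the Prediction Guard docs linked in the comment above; as a rough sketch (the "type" and "categories" fields below are illustrative assumptions, not confirmed parameter names), a categorical constraint could look like this:

# Sketch of structured output control; the "type" and "categories" keys are
# assumptions about the schema documented at https://docs.predictionguard.com.
pgllm_guarded = PredictionGuard(
    model="Nous-Hermes-Llama2-13B",
    output={
        "type": "categorical",
        "categories": ["product announcement", "apology", "relational"],
    },
)
pgllm_guarded(prompt.format(query="What kind of post is this?"))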

Basic LLM Chaining with the Prediction Guard wrapper:

import os

from langchain import PromptTemplate, LLMChain
from langchain.llms import PredictionGuard

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_API_KEY"] = "<api key>"

pgllm = PredictionGuard(model="Nous-Hermes-Llama2-13B")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.predict(question=question)