Quick Start

Reliable, future-proof AI predictions

Technical teams need to figure out how to integrate the latest Large Language Models (LLMs), but:

  • You can’t build robust systems with inconsistent, unvalidated outputs; and
  • LLM integrations scare corporate lawyers, finance departments, and security professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA), leaked IP/PII, and “injection” vulnerabilities.

Some companies are moving forward anyway by investing tons of engineering time/money in their own wrappers around LLMs and expensive hosting with OpenAI/Azure. Others are ignoring these issues and pressing forward with fragile and risky LLM integrations.

At Prediction Guard, we think that you should get useful output from compliant AI systems (without crazy implementation/hosting costs), so our solution lets you:

  1. De-risk LLM inputs to remove PII and prompt injections;
  2. Validate and check LLM outputs to guard against hallucination, toxicity, and inconsistencies; and
  3. Implement private and compliant LLM systems (HIPAA and self-hosted) that give your legal counsel a warm fuzzy feeling while still delighting your customers with AI features.

Sounds pretty great, right? Follow the steps below to start leveraging trustworthy LLMs:

1

Get access to Prediction Guard Enterprise

We host and control the latest LLMs for you in our secure and privacy-conserving enterprise platform, so you can focus on your prompts and chains. To access the hosted LLMs, contact us here to get an enterprise access token. You will need this access token to continue.

2

Start using one of our LLMs!

Suppose you want to prompt an LLM to answer a user query from a chat application. You can set up a message thread, which includes a system prompt (that instructs the LLM how to behave in responding) as follows:

1[
2 {
3 "role": "system",
4 "content": "You are a helpful assistant. Your model is hosted by Prediction Guard, a leading AI company."
5 },
6 {
7 "role": "user",
8 "content": "Where can I access the LLMs in a safe and secure environment?"
9 }
10]
3

Download the SDK for your favorite language

You can then use any of our official SDKs or REST API to prompt one of our LLMs!

1import json
2import os
3
4from predictionguard import PredictionGuard
5
6# You can set your Prediction Guard API Key as an env variable named "PREDICTIONGUARD_API_KEY",
7
8# or when creating the client object
9
10client = PredictionGuard(
11api_key="<api key>"
12)
13
14messages = [
15{
16"role": "system",
17"content": "You are a helpful chatbot that helps people learn."
18},
19{
20"role": "user",
21"content": "What is a good way to learn to code?"
22}
23]
24
25result = client.chat.completions.create(
26model="Hermes-2-Pro-Llama-3-8B",
27messages=messages,
28max_tokens=100
29)
30
31print(json.dumps(
32result,
33sort_keys=True,
34indent=4,
35separators=(',', ': ')
36))

Note, you will need to replace <your api key> in the above examples with your actual access token.