Getting Started

Prediction Guard allows you to seamlessly integrate private, controlled, and compliant Large Language Model (LLM) functionality. In addition to providing a scalable LLM API, we enable you to prevent hallucinations, institute governance, and ensure compliance (all while delighting customers with magical AI features).

Using Prediction Guard gives you quick and easy access to state-of-the-art LLMs, without needing to spend weeks figuring out implementation details, managing a bunch of different API specs, or setting up secure infrastructure for model deployments.

LLMs are hosted by Prediction Guard in a secure, privacy-conserving environment built in partnership with Intel’s Liftoff program for startups.

Note - Prediction Guard does NOT save or share any data sent to these models (or responses from the models). Further, we are able to sign a BAA for customers needing HIPAA compliance. Contact support with any questions.

Note - We only integrate models that are licensed permissively for commercial use.

Open Access LLMs (what most of our customers use) 🚀

Open access models are amazing these days! Each of these models was trained by a talented team and released publicly under a permissive license. The data used to train each model and the prompt formatting for each model vary. We’ve tried to give you some of the relevant details here, but shoot us a message in Discord with any questions.

Getting Started

Here you’ll find information about how to integrate with our API, along with example implementations. As a first taste, the sketch below shows what a basic request can look like.
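The snippet below is a minimal sketch of calling a chat completions endpoint over plain HTTP from Python. The endpoint URL, authorization header format, model name, and response shape shown here are assumptions for illustration only; check the API reference for the exact values, and keep your Prediction Guard API key in an environment variable rather than in code.

```python
import os

import requests

# Assumed endpoint for illustration; see the API reference for the exact URL,
# available models, and request schema.
API_URL = "https://api.predictionguard.com/chat/completions"
API_KEY = os.environ["PREDICTIONGUARD_API_KEY"]  # your Prediction Guard API key

payload = {
    "model": "example-open-access-model",  # hypothetical model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize Prediction Guard in one sentence."},
    ],
}

response = requests.post(
    API_URL,
    # Bearer-token auth is an assumption here; use whatever header the API reference specifies.
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()

# Print the first generated message, assuming an OpenAI-style response body.
print(response.json()["choices"][0]["message"]["content"])
```

If you prefer not to handle HTTP directly, the same request can typically be made through an official client library; the sections that follow cover the supported options in detail.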