Using Prediction Guard gives you quick and easy access to state-of-the-art LLMs, without needing to spend weeks figuring out implementation details, managing a bunch of different API specs, or setting up secure infrastructure for model deployments.
LLMs are hosted by Prediction Guard in a secure, privacy-preserving environment built in partnership with Intel’s Liftoff program for startups.
Note - Prediction Guard does NOT save or share any data sent to these models (or responses from the models). Further, we are able to sign a BAA for customers needing HIPAA compliance. Contact support with any questions.
Note - We only integrate models that are licensed permissively for commercial use.
Open Access LLMs (what most of our customers use) 🚀
Open access models are amazing these days! Each of these models was trained by a talented team and released publicly under a permissive license. The data used to train each model and the prompt formatting for each model varies. We’ve tried to give you some of the relevant details here, but shoot us a message in Slack with any questions.
Models available in the /completions and /chat/completions endpoints
| Model Name | Type | Use Case | Prompt Format | Context Length | More Info |
|---|---|---|---|---|---|
| Hermes-2-Pro-Llama-3-8B | Chat | Instruction following or chat-like applications | ChatML | 4096 | link |
| Nous-Hermes-Llama2-13B | Text Generation | Generating output in response to arbitrary instructions | Alpaca | 4096 | link |
| Hermes-2-Pro-Mistral-7B | Chat | Instruction following or chat-like applications | ChatML | 4096 | link |
| Neural-Chat-7B | Chat | Instruction following or chat-like applications | Neural Chat | 4096 | link |
| Yi-34B-Chat | Chat | Instruction following in English or Chinese | ChatML | 2048 | link |
| deepseek-coder-6.7b-instruct | Code Generation | Generating computer code or answering tech questions | Deepseek | 4096 | link |
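As a quick illustration of what a /chat/completions call looks like for one of these models, here is a minimal sketch of the request body. This assumes an OpenAI-style messages payload; the exact field names and endpoint URL are assumptions, so check the Prediction Guard API reference (or SDK docs) before using this in production.

```python
import json

# Sketch of a chat completion request body for Hermes-2-Pro-Llama-3-8B.
# Field names ("model", "messages", "max_tokens") assume an OpenAI-style
# schema -- verify against the Prediction Guard API reference.
payload = {
    "model": "Hermes-2-Pro-Llama-3-8B",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize why open access LLMs matter."},
    ],
    "max_tokens": 256,
}
body = json.dumps(payload)

# The body would then be POSTed with your API key in an auth header, e.g.:
# requests.post("https://api.predictionguard.com/chat/completions",
#               headers={"x-api-key": "<YOUR_API_KEY>"}, data=body)
print(body)
```

The /chat/completions endpoint handles the prompt formatting for you; you only supply the role-tagged messages.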
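The "Prompt Format" column matters most when you hit the raw /completions endpoint, where you format the prompt yourself. As an example, here is a hypothetical helper (not part of any Prediction Guard SDK) that renders chat messages in the ChatML format used by models like Hermes-2-Pro-Llama-3-8B and Yi-34B-Chat:

```python
# Hypothetical helper: renders a list of role-tagged messages into the
# ChatML prompt format (<|im_start|>role ... <|im_end|> blocks), ending
# with an open assistant turn for the model to complete.
def to_chatml(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
print(prompt)
```

Models trained on other formats (Alpaca, Neural Chat, Deepseek) expect different delimiters, so match the helper to the model you choose.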