Using LLMs

Prediction Guard in action

We've worked on a wide variety of enterprise LLM use cases across many industries. Throughout this work, we've found the following principles of LLM usage to be critical and transferable. If you are getting started with Prediction Guard, or with LLMs in general, the tutorials below should help you level up quickly. Each tutorial can be run in Google Colab without any local environment setup.

  1. Accessing LLMs - Use your Prediction Guard access token to run your first text completions (see the first sketch after this list)
  2. Basic prompting - Learn how to prompt these models for autocomplete, zero-shot instructions, and few-shot (or in-context) learning
  3. Chat - Learn how to use LLMs to generate responses in a message thread and build chatbots (see the second sketch after this list)
  4. Prompt engineering - Leverage prompt templates and model parameters to home in on the right workflows
  5. Augmentation and retrieval - Augment your prompts with your own data
  6. Agents - Create more complex automations with agentic workflows
  7. Data extraction - Extract relevant data from text output and perform factuality checks
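
To give a flavor of the first tutorial, here is a minimal sketch of a first text completion. It assumes the predictionguard Python client with an OpenAI-style completions.create method; the model name, environment variable, and parameter names are illustrative, so check the tutorial for the exact setup.

```python
import os

from predictionguard import PredictionGuard

# Assumes your access token is exported as PREDICTIONGUARD_API_KEY
# (an illustrative variable name; the client may also accept api_key=... directly).
client = PredictionGuard(api_key=os.environ["PREDICTIONGUARD_API_KEY"])

# Run a first text completion. The model name here is illustrative;
# the tutorial lists the models available to your account.
response = client.completions.create(
    model="Hermes-2-Pro-Llama-3-8B",
    prompt="The best thing about using hosted LLMs is",
    max_tokens=100,
)

# Assuming a dict-like, OpenAI-style response shape.
print(response["choices"][0]["text"])
```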

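As a preview of the chat tutorial, here is a sketch of a multi-turn chat completion under the same assumptions (the chat.completions.create method, model name, and response shape are again illustrative):

```python
import os

from predictionguard import PredictionGuard

client = PredictionGuard(api_key=os.environ["PREDICTIONGUARD_API_KEY"])

# A message thread: a system prompt plus alternating user/assistant turns.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a good first LLM use case to build?"},
]

# Assumed OpenAI-style chat endpoint; see the Chat tutorial for specifics.
response = client.chat.completions.create(
    model="Hermes-2-Pro-Llama-3-8B",
    messages=messages,
)

print(response["choices"][0]["message"]["content"])
```
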
Note - These examples will be given in Python, but the same workflows could be accomplished in other languages via our REST API.
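
For reference, the same first completion can be made over the REST API directly (here via Python's requests for consistency). The endpoint URL, Bearer-token header, and payload fields below are assumptions based on common conventions; confirm them against the API reference.

```python
import os

import requests

# Assumed endpoint and auth scheme; verify against the API reference.
url = "https://api.predictionguard.com/completions"
headers = {
    "Authorization": f"Bearer {os.environ['PREDICTIONGUARD_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "model": "Hermes-2-Pro-Llama-3-8B",  # illustrative model name
    "prompt": "The best thing about using hosted LLMs is",
    "max_tokens": 100,
}

resp = requests.post(url, headers=headers, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```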