Using LLMs - Prediction Guard in action
We've worked on a wide variety of enterprise LLM use cases across many industries. Throughout this work, we've found the following principles of LLM usage to be critical and transferable. If you are getting started with Prediction Guard or with LLMs in general, the tutorials below should help you level up quickly. Each tutorial can be run in Google Colab without any local environment setup.
- Accessing LLMs - Use your Prediction Guard access token to run your first text completions
- Basic prompting - Learn how to prompt these models for autocomplete, zero-shot instructions, and few-shot (or in-context) learning
- Chat - Learn how to use LLMs to generate responses in a message thread and build chatbots
- Prompt engineering - Leverage prompt templates and model parameters to home in on the right workflows
- Augmentation and retrieval - Augment your prompts with your own data
- Agents - Create more complex automations with agentic workflows
- Data extraction - Extract relevant data from text output and perform factuality checks
Note - These examples are given in Python, but the same workflows can be accomplished in other languages via our REST API.
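To make the REST note concrete, here is a minimal sketch of assembling a text completion request as a plain HTTP call. This is illustrative only: the endpoint URL, the `Authorization` header format, and the model name are assumptions, not confirmed values from this page - consult the Prediction Guard API reference for the exact details.

```python
import json
import os

# Assumed endpoint -- verify against the Prediction Guard API reference.
API_URL = "https://api.predictionguard.com/completions"

def build_completion_request(prompt, model, api_key):
    """Assemble the URL, headers, and JSON payload for a completion call."""
    headers = {
        # Assumed auth scheme: bearer token from your Prediction Guard account.
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {"model": model, "prompt": prompt}
    return API_URL, headers, payload

url, headers, payload = build_completion_request(
    prompt="The sky is",
    model="example-model",  # placeholder model name
    api_key=os.environ.get("PREDICTIONGUARD_API_KEY", "<your token>"),
)
print(json.dumps(payload))

# Send with any HTTP client, for example:
#   import requests
#   resp = requests.post(url, headers=headers, json=payload)
#   print(resp.json())
```

Because the request is ordinary JSON over HTTPS, the same call can be made from any language with an HTTP client, which is what makes the workflows in these tutorials portable beyond Python.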