Using LLMs

Accessing LLMs

(Run this example in Google Colab here)

Prompting is the process of providing a partial (usually text) input to a model. As we discussed in the last chapter, models will then use their parameterized data transformations to find a probable completion or output that matches the prompt.

To run any prompt through a model, we need to set a foundation for how we will access generative AI models and perform inference. There is a huge variety in the landscape of generative AI models in terms of size, access patterns, licensing, etc. However, a common theme is the usage of LLMs through a REST API, which is either:

  • Provided by a third party service (OpenAI, Anthropic, Cohere, etc.)
  • Self-hosted in your own infrastructure or in an account you control with a model hosting provider (Replicate, Baseten, etc.)
  • Self-hosted using a DIY model serving API (Flask, FastAPI, etc.; see the sketch after this list)
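
To make the third option concrete, below is a minimal sketch of a DIY serving endpoint built with FastAPI. The generate() function is a hypothetical stand-in for an actual model call (for example, a locally loaded open model), and the route name and request fields are illustrative assumptions rather than a fixed standard:

# A minimal sketch of a DIY serving endpoint (assumes FastAPI and Pydantic are installed).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CompletionRequest(BaseModel):
    prompt: str
    max_tokens: int = 100

def generate(prompt: str, max_tokens: int) -> str:
    # Hypothetical placeholder; replace with real model inference
    # (e.g., a locally loaded open model).
    return prompt + " <model output would go here>"

@app.post("/completions")
def completions(req: CompletionRequest):
    text = generate(req.prompt, req.max_tokens)
    return {"choices": [{"index": 0, "text": text}]}

You could then serve this app with a server such as uvicorn and POST a JSON body containing a prompt to the /completions route.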

We will use Prediction Guard to call open access LLMs (like Mistral, Llama 2, WizardCoder, etc.) via a standardized OpenAI-like API. This will allow us to explore the full range of LLMs available. Further, it will illustrate how companies can access a wide range of models (outside of the GPT family).

In order to “prompt” an LLM via Prediction Guard (and eventually engineer prompts), you will need to first install the Python client and supply your access token as an environment variable:

$ pip install predictionguard

import os
import json

import predictionguard as pg


os.environ['PREDICTIONGUARD_TOKEN'] = "<your access token>"

You can find out more about the models available via the Prediction Guard API in the docs, and you can list out the model names via this command:

print(pg.Completion.list_models())

Generating text with one of these models is then just a single request for a “Completion” (note, we also support chat completions). Here we will call the Neural-Chat-7B model and try to have it autocomplete a joke.

# Request a completion from the Neural-Chat-7B model.
response = pg.Completion.create(model="Neural-Chat-7B",
                                prompt="The best joke I know is: ")

# Pretty-print the full JSON response.
print(json.dumps(
    response,
    sort_keys=True,
    indent=4,
    separators=(',', ': ')
))

This should result in something similar to the following JSON output which includes the completion:

{
    "choices": [
        {
            "index": 0,
            "model": "Neural-Chat-7B",
            "status": "success",
            "text": "2 guys walk into a bar. A third guy walks out.\n\nThe best joke I know is: A man walks into a bar and orders a drink. The bartender says, \"Sorry, we don't serve time travelers here.\"\n\nThe best joke I know is: A man walks into a bar and orders a drink. The bartender says, \"Sorry, we don't serve time travelers here.\" The man says, \"I'm not"
        }
    ],
    "created": 1701787998,
    "id": "cmpl-fFqvj8ySZVHFkZFfFIf0rxzGQ3XSC",
    "object": "text_completion"
}

The actual text completion is included in response['choices'][0]['text'].
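
For example, to print just the generated text:

print(response['choices'][0]['text'])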