Prompt Formatting
You might be used to entering simple text prompts into systems like ChatGPT. When you use certain open-access LLMs directly, however, you may need to follow a model-specific prompt format. These models are fine-tuned on data arranged in a particular format, and matching your prompts to that training format can yield a noticeable boost in output quality.
Check out the model details page to see which prompt format each LLM expects. We've included some of the most important prompt formats below.
Note on Chat model prompts: For your convenience, we automatically apply the right prompt format when you supply a messages object to our /chat/completions endpoint or via the client.chat.completions.create() method in the Python client. You don't need to add special tokens or apply the prompt formats below yourself, as that would duplicate the formatting. However, if you want to use chat-tuned models through the /completions endpoint or via the client.completions.create() method, you should apply the appropriate prompt format from those shown below.
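For example, assuming an OpenAI-compatible Python client (the base URL and model name below are placeholders; substitute your provider's values), the two call paths look roughly like this:

```python
from openai import OpenAI  # any OpenAI-compatible client works the same way

# Placeholder base URL and API key -- point these at your provider.
client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

# 1) Chat endpoint: pass a messages object and the correct prompt
#    format is applied for you -- do not add special tokens yourself.
chat = client.chat.completions.create(
    model="example/chat-model",  # placeholder model name
    messages=[{"role": "user", "content": "Explain quicksort briefly."}],
)
print(chat.choices[0].message.content)

# 2) Completions endpoint: you are responsible for the prompt format.
#    This example assumes a ChatML-tuned model (see the ChatML section).
prompt = (
    "<|im_start|>user\n"
    "Explain quicksort briefly.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
completion = client.completions.create(
    model="example/chat-model",  # placeholder model name
    prompt=prompt,
)
print(completion.choices[0].text)
```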
Alpaca
This format has two options. First, for prompts that are a single instruction with no additional context (e.g., retrieved context):
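The template below is a sketch based on the widely published Stanford Alpaca instruction format; the <instruction> placeholder is illustrative, so confirm the exact wording on the model details page:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
<instruction>

### Response:
```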
For prompts where context is injected:
(Replace the portions of the prompt below in <...> with the appropriate information, and do not keep the < or > characters.)
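A sketch of the context-bearing variant, again following the standard Alpaca template; <context> stands in for whatever retrieved or supplied context you inject:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
<instruction>

### Input:
<context>

### Response:
```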
Neural Chat
(Replace the portions of the prompt below in <...> with the appropriate information, and do not keep the < or > characters.)
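A sketch based on the prompt format Intel documents for its Neural Chat models; the placeholder names are illustrative, so verify against the model card:

```
### System:
<system prompt>
### User:
<user prompt>
### Assistant:
```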
ChatML
(Replace the portions of the prompt below in curly braces {...} with the appropriate information, and do not keep the curly braces.)
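A sketch of the standard ChatML turn structure; {system_message} and {prompt} are illustrative placeholder names:

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```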
Deepseek
(Replace the portions of the prompt below in curly braces {...} with the appropriate information, and do not keep the curly braces.)
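A sketch based on the plain-text turn format DeepSeek documents for its chat models; the exact template differs between DeepSeek model families (DeepSeek Coder, for instance, uses an instruction-style template), so verify against the model card:

```
User: {prompt}

Assistant:
```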