Completions with Streaming
Retrieve text completions based on the provided input.
Headers
Authorization
Bearer authentication of the form Bearer <token>, where <token> is your auth token.
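As a minimal sketch, the header can be constructed like this in Python (the token value is a placeholder, not a value from this reference):

    headers = {"Authorization": "Bearer YOUR_AUTH_TOKEN"}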
Request
This endpoint expects an object.
model
The model to use for generating completions.
prompt
The prompt to use for generating completions.
frequency_penalty
A number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they have appeared in the text so far, decreasing the likelihood of further occurrences.
logit_bias
Modifies the likelihood of specified tokens appearing in a response.
max_tokens
The maximum number of tokens in the generated completion.
presence_penalty
A number between -2.0 and 2.0. Positive values apply a flat penalty to tokens that have already appeared in the text so far, decreasing the likelihood of further occurrences.
stream
Whether to stream the response. When enabled, partial completions are returned as they are generated rather than in a single response.
stop
One or more sequences at which the model will stop generating further tokens.
temperature
Controls randomness in sampling: higher values produce more varied completions, while lower values make them more focused and deterministic.
top_p
Controls the diversity of the generated text via nucleus sampling: only tokens within the top cumulative probability mass p are considered.
top_k
Controls the diversity of the generated text via top-k sampling: only the k most likely tokens are considered at each step.
output
Options to affect the output of the response.
input
Options to affect the input of the request.
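The sketch below shows one way to send a streaming request with these fields using Python's requests library. The endpoint URL and token are placeholders (not taken from this reference), and only a subset of the parameters is set. Streamed responses are often delivered line by line; here each raw line is simply printed.

    import requests

    # Placeholder endpoint and token; substitute your actual values.
    url = "https://api.example.com/v1/completions"
    headers = {"Authorization": "Bearer YOUR_AUTH_TOKEN"}

    payload = {
        "model": "example-model",            # model to use for the completion
        "prompt": "Write a haiku about the sea.",
        "max_tokens": 64,                    # cap on generated tokens
        "temperature": 0.7,                  # sampling randomness
        "top_p": 0.9,                        # nucleus sampling mass
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "stop": ["\n\n"],                    # stop sequence(s)
        "stream": True,                      # stream partial completions
    }

    # stream=True keeps the connection open so chunks can be read as they arrive.
    with requests.post(url, headers=headers, json=payload, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            if line:
                print(line)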
Response
Successful response.
id
Unique ID for the completion.
object
Type of object (completion).
created
Timestamp of when the completion was created.
model
The model used for generating the result.
choices
The list of result choices generated by the model.
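Continuing the sketch above, a non-streaming request (stream omitted or set to false) could be read back through the fields listed here. The URL, token, and payload values are placeholders, and the structure of each choice beyond the top-level list is not specified in this reference, so each choice is printed as-is.

    import requests

    url = "https://api.example.com/v1/completions"        # placeholder
    headers = {"Authorization": "Bearer YOUR_AUTH_TOKEN"}  # placeholder
    payload = {"model": "example-model", "prompt": "Hello", "stream": False}

    resp = requests.post(url, headers=headers, json=payload)
    resp.raise_for_status()
    completion = resp.json()

    print(completion["id"])       # unique ID for the completion
    print(completion["object"])   # object type: "completion"
    print(completion["created"])  # timestamp of when the completion was created
    print(completion["model"])    # model used for generating the result
    for choice in completion["choices"]:
        print(choice)             # each result choice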