Completions with Streaming
Retrieve text completions based on the provided input.
Headers
Bearer authentication of the form Bearer <token>, where <token> is your auth token.
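A minimal sketch of setting this header from Python. The environment variable name API_TOKEN is an assumption, not part of this reference; substitute however you store your token.

```python
import os

# Assumed environment variable name for the auth token.
token = os.environ["API_TOKEN"]

# Every request carries the bearer token in the Authorization header.
headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}
```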
Request
The chat model to use for generating completions.
The prompt to use for generating completions.
A value between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they have appeared in the text so far, decreasing the likelihood of repetition.
Modifies the likelihood of specified tokens appearing in a response.
The maximum number of tokens in the generated completion.
A value between -2.0 and 2.0. Positive values apply a flat penalty to tokens that have already appeared in the text so far, decreasing the likelihood that they recur.
Whether to stream completions back incrementally as they are generated.
The temperature for controlling randomness in completions; higher values yield more varied output.
The diversity of the generated text, controlled via nucleus (top-p) sampling.
The diversity of the generated text, controlled via top-k sampling.
Options to affect the output of the response.
Options to affect the input of the request.
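A hedged sketch of a streaming request built from the parameters above, using the requests library over plain HTTP. The endpoint URL is a placeholder, and the JSON field names (model, prompt, max_tokens, temperature, top_p, top_k, frequency_penalty, presence_penalty, stream) are assumed common names rather than names confirmed by this reference; check your deployment's schema.

```python
import os
import requests  # third-party HTTP client (pip install requests)

# Placeholder endpoint URL; substitute the actual completions endpoint.
url = "https://api.example.com/v1/completions"
headers = {"Authorization": f"Bearer {os.environ['API_TOKEN']}"}

# Field names below are assumptions and may differ from the real schema.
payload = {
    "model": "example-model",        # the chat model to use
    "prompt": "Write one sentence about rivers.",
    "max_tokens": 64,                # cap on generated tokens
    "temperature": 0.7,              # randomness of the output
    "top_p": 0.9,                    # nucleus sampling
    "top_k": 40,                     # top-k sampling
    "frequency_penalty": 0.0,        # -2.0 to 2.0
    "presence_penalty": 0.0,         # -2.0 to 2.0
    "stream": True,                  # turn streaming on
}

# stream=True keeps the connection open so chunks can be read as they arrive.
response = requests.post(url, headers=headers, json=payload, stream=True)
response.raise_for_status()
```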
Response
Successful response.
Unique ID for the completion.
Type of object (completion).
Timestamp of when the completion was created.
The model used for generating the result.
The set of result choices.
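A sketch of consuming the streamed response from the request above, assuming each chunk arrives as a server-sent-events line of the form data: {...} carrying the fields listed here (id, object, created, model, choices). The framing, the [DONE] sentinel, and the "text" key inside each choice are assumptions, not details confirmed by this reference.

```python
import json

# Continues from the streaming request above: read the body line by line
# as chunks arrive instead of waiting for the full response.
for line in response.iter_lines():
    if not line:
        continue  # skip keep-alive blank lines
    decoded = line.decode("utf-8")
    # Assumed server-sent-events framing ("data: {...}"); adjust if the API
    # uses a different chunk format.
    if decoded.startswith("data: "):
        decoded = decoded[len("data: "):]
    if decoded.strip() == "[DONE]":
        break  # assumed end-of-stream sentinel
    chunk = json.loads(decoded)
    # Fields documented above: id, object, created, model, choices.
    for choice in chunk.get("choices", []):
        # The "text" key inside each choice is an assumption.
        print(choice.get("text", ""), end="", flush=True)
```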