External Documentation
To learn more, visit the LiteLLM documentation.
Basic Parameters
| Parameter | Description |
|---|---|
| Completions Number | The number of completions to generate. |
| Max Tokens | The maximum number of tokens to generate in the completion. |
| Model ID | The ID of the model to use for text completions. |
| Prompt | The prompt to send to the model. |
| Stop Tokens | A comma-separated list of tokens that will stop the generation. Example: \n, END, --- |
| Temperature | The degree of randomness to use in the generation. Valid range is 0.0 to 2.0. |
| Top P | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with Top P probability mass. |
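The basic parameters map onto an OpenAI-style text completion request. The snippet below is a minimal sketch, assuming LiteLLM's Python `text_completion` helper; the model ID, prompt, and parameter values are illustrative placeholders.

```python
import litellm

# Minimal sketch: basic parameters passed to LiteLLM's OpenAI-compatible
# text completion helper. Model ID, prompt, and values are illustrative.
response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",         # Model ID
    prompt="Write a haiku about the sea.",  # Prompt
    n=1,                                    # Completions Number
    max_tokens=64,                          # Max Tokens
    stop=["\n", "END", "---"],              # Stop Tokens
    temperature=0.7,                        # Temperature (0.0 to 2.0)
    top_p=1.0,                              # Top P (nucleus sampling)
)

# The response mirrors the OpenAI completions response format.
print(response.choices[0].text)
```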
Advanced Parameters
| Parameter | Description |
|---|---|
| Additional Parameters | A JSON object for additional request body parameters. Values specified here override equivalent parameters. The object must follow the vendor’s structure as defined in its API documentation. |
| Frequency Penalty | Penalize new tokens based on their existing frequency in the text so far. Valid range is -2.0 to 2.0. |
| Include Log Probabilities | Select to include the log probabilities for the most likely tokens. |
| Logit Bias | A JSON object mapping token IDs to bias values, modifying the probability of generating specific tokens. Example: see the sketch after this table. |
| Presence Penalty | Penalize tokens based on whether they appear in the text so far. Valid range is -2.0 to 2.0. |
| Seed | A seed to make the generated output more deterministic. |
| User ID | The ID of the user to associate with the request. |
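The advanced parameters extend the same call. The sketch below is again illustrative and assumes LiteLLM's `text_completion` helper: the penalty values, token ID in the logit bias map, seed, and user ID are placeholder assumptions, and `logit_bias` keys follow the OpenAI convention of token IDs given as strings.

```python
import litellm

# Illustrative sketch: advanced parameters on the same text completion call.
# All values (penalties, token ID, seed, user ID) are placeholders.
response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Summarize the plot of Hamlet in one sentence.",
    max_tokens=64,
    frequency_penalty=0.5,        # Frequency Penalty (-2.0 to 2.0)
    presence_penalty=0.2,         # Presence Penalty (-2.0 to 2.0)
    logprobs=5,                   # Include Log Probabilities for the top 5 tokens
    logit_bias={"50256": -100},   # Logit Bias: strongly suppress token ID 50256
    seed=42,                      # Seed for more deterministic output
    user="user-1234",             # User ID associated with the request
)
```

Any Additional Parameters would be supplied as a separate JSON object following the vendor's own API structure and are not shown here.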