Generate a text completion from a model based on a prompt. The action returns the generated text along with token usage statistics.
External Documentation: To learn more, visit the LiteLLM documentation.

Basic Parameters

Parameter | Description
Completions Number | The number of completions to generate.
Max Tokens | The maximum number of tokens to generate in the completion.
Model ID | The ID of the model to generate text completions with.
Prompt | The prompt to send to the model.
Stop Tokens | A comma-separated list of tokens that will stop the generation. Example: \n, END, ---
Temperature | The degree of randomness to use in the generation. Valid range is 0.0 to 2.0.
Top P | An alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens that make up the Top P probability mass.
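
These fields map onto the OpenAI-style completion parameters that LiteLLM passes through to the underlying provider. The following is a minimal sketch, assuming the LiteLLM Python SDK and its text_completion helper rather than the workflow action itself; the action builds the same request for you, so treat the call below as an illustration of how the parameters line up, not as the action's implementation.

# A minimal sketch, assuming the LiteLLM Python SDK's text_completion helper,
# which mirrors the OpenAI-style /completions parameters listed above.
import litellm

response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",  # Model ID
    prompt="Say this is a test",     # Prompt
    n=1,                             # Completions Number
    max_tokens=16,                   # Max Tokens
    stop=["\n", "END", "---"],       # Stop Tokens
    temperature=0.7,                 # Temperature (0.0 to 2.0)
    top_p=1.0,                       # Top P (nucleus sampling)
)
print(response.choices[0].text)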

Advanced Parameters

Parameter | Description
Additional Parameters | A JSON object of additional request body parameters. Values specified here override equivalent parameters. The object must follow the vendor's structure as defined in its API documentation. Example: {"first_key": 12345, "second_key": "some_value"}
Frequency Penalty | Penalizes new tokens based on their existing frequency in the text so far. Valid range is -2.0 to 2.0.
Include Log Probabilities | Select to include the log probabilities of the most likely tokens.
Logit Bias | A JSON object mapping token IDs to bias values, which modifies the probability of generating specific tokens. Example: {"2683": -100, "7211": 5}
Presence Penalty | Penalizes tokens based on whether they appear in the text so far. Valid range is -2.0 to 2.0.
Seed | A seed to make the generated output more deterministic.
User ID | The ID of the user to associate with the request.
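
The advanced fields follow the same OpenAI-style names. The sketch below again assumes the LiteLLM Python SDK with illustrative values only; the extra_body line stands in for the Additional Parameters object and is an assumption about how vendor-specific body fields are forwarded.

# Sketch of the same call with the advanced parameters filled in; values are
# illustrative. The extra_body pass-through for Additional Parameters is an
# assumption, not the action's documented behavior.
import litellm

response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Say this is a test",
    frequency_penalty=0.5,                 # Frequency Penalty (-2.0 to 2.0)
    presence_penalty=0.0,                  # Presence Penalty (-2.0 to 2.0)
    logprobs=5,                            # Include Log Probabilities
    logit_bias={"2683": -100, "7211": 5},  # Logit Bias (token ID -> bias)
    seed=42,                               # Seed
    user="user-1234",                      # User ID
    extra_body={"first_key": 12345},       # Additional Parameters (assumed pass-through)
)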

Example Output

{
	"id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
	"object": "text_completion",
	"created": 1589478378,
	"model": "gpt-3.5-turbo-instruct",
	"system_fingerprint": "fp_44709d6fcb",
	"choices": [
		{
			"text": "\n\nThis is indeed a test",
			"index": 0,
			"logprobs": null,
			"finish_reason": "length"
		}
	],
	"usage": {
		"prompt_tokens": 5,
		"completion_tokens": 7,
		"total_tokens": 12
	}
}
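
The generated text lives in choices[n].text and the token counts in usage. The short Python sketch below reads those fields; the hard-coded dict is a stand-in for the action's parsed JSON output, not a live response.

# Reads the fields shown in the example output above. `result` is a stand-in
# for the action's parsed JSON output.
result = {
    "choices": [
        {"text": "\n\nThis is indeed a test", "index": 0,
         "logprobs": None, "finish_reason": "length"}
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
}

generated_text = result["choices"][0]["text"]
finish_reason = result["choices"][0]["finish_reason"]  # "length" = Max Tokens reached
total_tokens = result["usage"]["total_tokens"]

print(generated_text.strip())
print(f"finish_reason={finish_reason}, total_tokens={total_tokens}")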

Workflow Library Example

Get Text Completion with LiteLLM and Send Results via Email
Preview this workflow in the Workflow Library on desktop.