Send a prompt to the model and get the model’s response.

Note: This is a legacy action and is only available with older models. It is highly recommended to use the Create Chat Completion action instead.

External Documentation

To learn more, visit the OpenAI documentation.

Basic Parameters

Parameter | Description
Model | ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
Prompt | The prompt to generate a completion for.

Advanced Parameters

Parameter | Description
Stop | Sequences of text that instruct the model to stop generating further tokens once any of them is encountered. The response will not include the stop sequence(s) themselves. Valid values are: a single stop sequence (e.g., "END"), or a comma-separated list of up to 4 stop sequences (e.g., "END,STOP"). Note: the quotes are used only for the examples; avoid using quotes in the Stop parameter itself.
Temperature | The sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Token Limit | The maximum number of tokens to generate in the completion. A higher token limit allows a longer response.
User | A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
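
The parameters above map onto the request body of OpenAI's legacy `/v1/completions` endpoint. The sketch below is a minimal, hedged illustration of how such a request body could be assembled and sent outside the action; the helper names (`build_payload`, `create_completion`) are hypothetical, not part of this action or the OpenAI SDK. Note how a comma-separated Stop value becomes a list, and Token Limit corresponds to the API's `max_tokens` field.

```python
import json
import urllib.request

# Legacy completions endpoint (the Create Chat Completion action uses /v1/chat/completions instead).
API_URL = "https://api.openai.com/v1/completions"

def build_payload(model, prompt, stop=None, temperature=None, max_tokens=None, user=None):
    """Assemble the request body, omitting any parameter that was left unset."""
    payload = {"model": model, "prompt": prompt}
    if stop is not None:
        # A comma-separated string of up to 4 stop sequences becomes a list for the API.
        payload["stop"] = stop.split(",") if "," in stop else stop
    if temperature is not None:
        payload["temperature"] = temperature
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens  # the action's "Token Limit" parameter
    if user is not None:
        payload["user"] = user
    return payload

def create_completion(api_key, **kwargs):
    """POST the payload to the legacy completions endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(**kwargs)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```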

Example Output

{
	"choices": [
		{
			"finish_reason": "string",
			"index": 0,
			"logprobs": {
				"text_offset": [
					0
				],
				"token_logprobs": [
					0
				],
				"tokens": [
					"string"
				],
				"top_logprobs": [
					{}
				]
			},
			"text": "string"
		}
	],
	"created": 0,
	"id": "string",
	"model": "string",
	"object": "string",
	"usage": {
		"completion_tokens": 0,
		"prompt_tokens": 0,
		"total_tokens": 0
	}
}
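
A downstream step typically only needs the generated text and the token usage from a response shaped like the example above. The following is a minimal sketch of extracting those fields; the helper name `extract_result` is illustrative, not part of the action.

```python
def extract_result(response: dict) -> tuple[str, int]:
    """Pull the first choice's text and the total token count from a completion response."""
    text = response["choices"][0]["text"]
    total_tokens = response["usage"]["total_tokens"]
    return text, total_tokens

# Response dict mirroring the structure of the example output above.
response = {
    "choices": [
        {"finish_reason": "stop", "index": 0, "logprobs": None, "text": "Hello!"}
    ],
    "created": 0,
    "id": "cmpl-example",
    "model": "gpt-3.5-turbo-instruct",
    "object": "text_completion",
    "usage": {"completion_tokens": 2, "prompt_tokens": 5, "total_tokens": 7},
}

text, total_tokens = extract_result(response)  # → ("Hello!", 7)
```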

Workflow Library Example

Create Completion with Openai and Send Results Via Email
