Generate a model response.

External Documentation

To learn more, visit the Gemini documentation.

Basic Parameters

Content Parts: The content of the current conversation with the model.
Example: [{ "text": "Write a story about a magic backpack." }]

Model: The name of the model to use for generating the completion.
Example: models/gemini-2.0-flash
Note: Only the latest model versions are supported.
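
The sketch below shows roughly how these basic parameters map onto the underlying Gemini generateContent REST call. It is a minimal illustration, not the action itself: the GEMINI_API_KEY environment variable and the use of Python's requests library are assumptions made for the example.

import os
import requests

# Model parameter: the model that generates the completion.
MODEL = "models/gemini-2.0-flash"
URL = f"https://generativelanguage.googleapis.com/v1beta/{MODEL}:generateContent"

body = {
    # Content Parts parameter: the current conversation with the model.
    "contents": [
        {"parts": [{"text": "Write a story about a magic backpack."}]}
    ]
}

# GEMINI_API_KEY is an assumed environment variable holding an API key.
resp = requests.post(URL, params={"key": os.environ["GEMINI_API_KEY"]}, json=body, timeout=30)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])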

Advanced Parameters

Cached Content: The name of the cached content to use as context when serving the prediction. Cached content can only be used with the model it was created for.
Format: cachedContents/{cachedContent}
For more information about the Cached Content parameter, visit Gemini’s API documentation.

Max Output Tokens: The maximum number of tokens to include in a response.

Role: The producer of the content. Useful to set for multi-turn conversations.

Safety Settings: A list of unique safety setting instances for blocking unsafe content.
Example: [{ "category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH" }, { "category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE" }]
For more information about the Safety Settings parameter, visit Gemini’s API documentation.

Seed: The seed used in decoding. If not set, the request uses a randomly generated seed.

Stop Sequences: A comma-separated list of character sequences (up to 5) that will stop output generation. If specified, Gemini stops at the first appearance of a stop sequence, and the stop sequence is not included in the response.
Example: STOP,END

System Instruction: Developer-set system instruction(s).
Example (shown alongside the conversation contents): { "system_instruction": { "parts": { "text": "You are a cat. Your name is Neko." } }, "contents": { "parts": { "text": "Hello there" } } }
For more information about the System Instruction parameter, visit Gemini’s API documentation.

Temperature: Controls the randomness of the output. Values can range from 0.0 to 2.0.

Tool Config: Tool configuration for any tool specified in the Tools parameter.
For more information about the Tool Config parameter, visit Gemini’s API documentation.

Tools: A list of tools the model may use to generate the next response.
For more information about the Tools parameter, visit Gemini’s API documentation.
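
The advanced parameters slot into the same generateContent request body shown above. The sketch below is again an illustration under the same assumptions; the field names follow the public REST API, and the literal values (including the cached content name, which is a placeholder) are examples only.

body = {
    "contents": [
        {
            "role": "user",                      # Role: the producer of the content
            "parts": [{"text": "Hello there"}],
        }
    ],
    "systemInstruction": {                       # System Instruction
        "parts": {"text": "You are a cat. Your name is Neko."}
    },
    "safetySettings": [                          # Safety Settings
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    ],
    "generationConfig": {
        "maxOutputTokens": 30,                   # Max Output Tokens
        "temperature": 0.7,                      # Temperature (0.0 to 2.0)
        "stopSequences": ["STOP", "END"],        # Stop Sequences
        "seed": 42,                              # Seed
    },
    # Cached Content: "cachedContents/example-cache" is a placeholder name; the
    # cache must have been created for the same model this request targets.
    # "cachedContent": "cachedContents/example-cache",
    # Tools and Tool Config would be added here when function calling is used:
    # "tools": [...],
    # "toolConfig": {...},
}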

Example Output

{
	"candidates": [
		{
			"content": {
				"parts": [
					{
						"text": "Maya hummed, tracing the faded patch on her worn-out backpack. It had been her grandma's, passed down along with stories of adventures"
					}
				],
				"role": "model"
			},
			"finishReason": "MAX_TOKENS",
			"avgLogprobs": -1.2023602803548177
		}
	],
	"usageMetadata": {
		"promptTokenCount": 8,
		"candidatesTokenCount": 30,
		"totalTokenCount": 38,
		"promptTokensDetails": [
			{
				"modality": "TEXT",
				"tokenCount": 8
			}
		],
		"candidatesTokensDetails": [
			{
				"modality": "TEXT",
				"tokenCount": 30
			}
		]
	},
	"modelVersion": "gemini-2.0-flash-001"
}
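
The generated text, finish reason, and token usage can be pulled out of the fields shown above. The helper below is a hypothetical sketch: its name and the assumption that the response has already been parsed into a dictionary are illustrative only.

def extract_reply(response: dict) -> tuple[str, str, int]:
    """Return the generated text, finish reason, and total token count
    from a generateContent response shaped like the example above."""
    candidate = response["candidates"][0]
    text = "".join(part["text"] for part in candidate["content"]["parts"])
    finish_reason = candidate.get("finishReason", "")   # e.g. "MAX_TOKENS" when truncated
    total_tokens = response["usageMetadata"]["totalTokenCount"]
    return text, finish_reason, total_tokens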

Workflow Library Example

Generate Completion with Gemini and Send Results Via Email

Preview this Workflow on desktop