Actions
Ask Gemini
Use Gemini to perform AI-driven tasks such as text generation, summarization, translation, and more.
External Documentation
To learn more, visit the Gemini documentation.
Basic Parameters
Parameter | Description |
---|---|
Model | The LLM used to complete the AI task. |
System Instructions | Define guidelines for system behavior, user interactions, and constraints. You can include sections such as:<br>* Role<br>* Instructions<br>* Rules<br>* Examples<br>For example: "You are an AI assistant designed to help users by providing accurate, relevant, and helpful responses. Maintain a neutral and professional tone, be concise yet informative, and always strive to be clear and understandable." |
User Prompt | The message containing your prompt request, which the model interprets to generate a response. |
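The snippet below is a minimal sketch of how the basic parameters above map onto a direct Gemini API call, assuming the `google-generativeai` Python SDK. The model name, environment variable, system instruction text, and prompt are illustrative placeholders, not values defined by this action.

```python
# Sketch: mapping Model, System Instructions, and User Prompt onto a direct
# Gemini API call with the google-generativeai SDK (illustrative only).
import os
import google.generativeai as genai

# Assumed environment variable holding an API key
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",  # "Model" parameter
    system_instruction=(
        "You are an AI assistant designed to help users by providing "
        "accurate, relevant, and helpful responses. Maintain a neutral "
        "and professional tone."
    ),  # "System Instructions" parameter
)

# "User Prompt" parameter
response = model.generate_content("Summarize the following incident report: ...")
print(response.text)
```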
Advanced Parameters
Parameter | Description |
---|---|
Maximum Tokens | The maximum number of tokens the model can generate in a response. Set the token limit based on your needs:<br>1. Short & Concise Answers (256–1,000 tokens) - Ideal for brief responses, summaries, or direct answers.<br>2. Detailed & Elaborate Responses (1,000–8,000 tokens) - Best for in-depth explanations, multi-step reasoning, or complex insights.<br>3. Data Processing & Transformations (2,000–10,000+ tokens) - Use higher limits for handling structured data (e.g., JSON, CSV) or processing large text inputs.<br>Note: For optimal performance, start with a lower limit and increase as needed. If not provided, the selected model's default value is used. |
Temperature | The randomness of the output. Lower values produce more focused and deterministic outputs, while higher values increase creativity and variation. Valid values range from 0.0 to 2.0.<br>Note: If not provided, the selected model's default value is used. |
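The sketch below shows how the advanced parameters above translate to generation settings, again assuming the `google-generativeai` Python SDK. The token limit and temperature values are examples within the documented ranges, not recommended defaults.

```python
# Sketch: applying Maximum Tokens and Temperature via a generation config
# with the google-generativeai SDK (illustrative values only).
import google.generativeai as genai

model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Explain the difference between symmetric and asymmetric encryption.",
    generation_config=genai.GenerationConfig(
        max_output_tokens=1000,  # "Maximum Tokens": suits detailed responses
        temperature=0.4,         # "Temperature": 0.0-2.0, lower = more deterministic
    ),
)
print(response.text)
```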
Workflow Library Example
Ask Gemini and Send Results Via Email