External Documentation
To learn more, visit the OpenAI documentation.
Basic Parameters
| Parameter | Description |
|---|---|
| Model | The LLM used to complete the AI task. |
| System Instructions | Define guidelines for system behavior, user interactions, and constraints. You can include sections such as Role, Instructions, Rules, and Examples. |
| User Prompt | The message containing your prompt, which the LLM interprets to generate a response. |
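The basic parameters above map directly onto the body of a chat-style LLM request. A minimal sketch follows, assuming the OpenAI Chat Completions request shape; the model name and message text are illustrative placeholders, not values from this document:

```python
# Sketch of a chat request body built from the basic parameters.
# "gpt-4o-mini" and the message contents are placeholders.
request_body = {
    # Model: the LLM used to complete the AI task
    "model": "gpt-4o-mini",
    "messages": [
        {
            # System Instructions: guidelines for behavior, interactions,
            # and constraints (Role, Instructions, Rules, Examples)
            "role": "system",
            "content": (
                "Role: helpful assistant. "
                "Rules: answer concisely and cite sources when possible."
            ),
        },
        {
            # User Prompt: the message the model interprets to respond
            "role": "user",
            "content": "Summarize the benefits of unit testing.",
        },
    ],
}
```

The system message and user message are kept separate so that the behavioral guidelines persist across every user prompt sent in the conversation.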
Advanced Parameters
| Parameter | Description |
|---|---|
| Maximum Tokens | The maximum number of tokens the model may generate in its response. Set the token limit based on your needs: 1. Short & Concise Answers (256–1,000 tokens) - Ideal for brief responses, summaries, or direct answers. 2. Detailed & Elaborate Responses (1,000–8,000 tokens) - Best for in-depth explanations, multi-step reasoning, or complex insights. 3. Data Processing & Transformations (2,000–10,000+ tokens) - Use higher limits when handling structured data (e.g., JSON, CSV) or processing large text inputs. Note: For optimal performance, start with a lower limit and increase as needed. If not provided, the selected model's default value is used. |
| Temperature | The randomness of the output. Lower values produce more focused and deterministic outputs, while higher values increase creativity and variation. Valid values range between 0.0 and 2.0. Note: If not provided, the selected model's default value is used. |