Execute SQL Statement
Execute a SQL statement and optionally await its results for a specified time.
External Documentation
To learn more, visit the Databricks documentation.
Basic Parameters
Parameter | Description |
---|---|
Disposition | Statements executed with INLINE disposition return result data inline, in JSON_ARRAY format, in a series of chunks. If a statement produces a result set larger than 25 MiB, that statement execution is aborted and no result set is available. Statements executed with EXTERNAL_LINKS disposition return result data as external links: URLs that point to cloud storage internal to the workspace. The EXTERNAL_LINKS disposition allows statements to generate arbitrarily sized result sets for fetching up to 100 GiB. For further information on this parameter, please refer to the Databricks Documentation. |
Format | Result format. For further information on this parameter, including its relation to the Disposition parameter, please refer to the Databricks Documentation. |
Statement | The SQL statement to execute. The statement can optionally be parameterized; see Statement Parameters. |
Statement Parameters | A list of parameters to pass into a SQL statement containing parameter markers. For further information on this parameter, please refer to Databricks Documentation. |
Warehouse ID | Warehouse upon which to execute a statement. Can be obtained via the List Warehouses action. |
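The basic parameters above map onto the request body of the Databricks SQL Statement Execution API (`POST /api/2.0/sql/statements/`). The sketch below builds that body for a parameterized statement; the warehouse ID and the `build_statement_request` helper are illustrative, not part of any SDK.

```python
import json

def build_statement_request(statement, warehouse_id, parameters=None,
                            disposition="INLINE", fmt="JSON_ARRAY"):
    """Assemble the JSON body for POST /api/2.0/sql/statements/."""
    body = {
        "statement": statement,
        "warehouse_id": warehouse_id,
        "disposition": disposition,
        "format": fmt,
    }
    if parameters:
        # Named parameter markers (:name) in the statement are bound
        # via a list of {name, value, type} objects.
        body["parameters"] = parameters
    return body

body = build_statement_request(
    "SELECT * FROM range(3) WHERE id > :min_id",
    "abc123def456",  # hypothetical warehouse ID from List Warehouses
    parameters=[{"name": "min_id", "value": "0", "type": "BIGINT"}],
)
print(json.dumps(body, indent=2))
```

Sending this body with a workspace URL and a bearer token completes the call; the response shape is shown under Example Output below.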
Advanced Parameters
Parameter | Description |
---|---|
Byte Limit | Applies the given byte limit to the statement's result size. If the result was truncated due to the byte limit, then truncated in the response is set to true. When using EXTERNAL_LINKS disposition, a default limit of 100 GiB is applied if this parameter is not explicitly set. Note: Byte counts are based on internal data representations and might not match the final size in the requested format. |
Catalog | Sets default catalog for statement execution, similar to USE CATALOG in SQL. For further information on this parameter, please refer to Databricks Documentation. |
On Timeout | When Timeout > 0s, the call blocks for up to the specified time. If the statement execution does not finish within this time, On Timeout determines whether the execution should continue or be canceled. When set to CONTINUE, the statement execution continues asynchronously and the call returns a Statement ID, which can be used for polling with Get Statement. When set to CANCEL, the statement execution is canceled and the call returns with a CANCELED state. For further information on this parameter, please refer to the Databricks Documentation. |
Row Limit | Applies the given row limit to the statement's result set, but unlike the LIMIT clause in SQL, it also sets the truncated field in the response to indicate whether the result was trimmed due to the limit or not. |
Schema | Sets default schema for statement execution, similar to USE SCHEMA in SQL. For further information on this parameter, please refer to Databricks Documentation. |
Timeout | The time in seconds the call waits for the statement's result. The valid range is 5s - 50s. When set to 0s, the statement executes in asynchronous mode and the call does not wait for the execution to finish; in this case, the call returns immediately with a PENDING state and a Statement ID. If the statement takes longer to execute, the On Timeout parameter determines what happens after the timeout is reached. For further information on this parameter, please refer to the Databricks Documentation. |
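The Timeout and On Timeout parameters correspond to the `wait_timeout` and `on_wait_timeout` fields of the request body. The sketch below is a minimal illustration, assuming those field names; the `execution_options` and `needs_polling` helpers are hypothetical conveniences, not SDK functions.

```python
def execution_options(wait_timeout="10s", on_wait_timeout="CONTINUE",
                      byte_limit=None, row_limit=None,
                      catalog=None, schema=None):
    """Advanced options merged into the statement-execution request body.

    wait_timeout "0s" requests fully asynchronous execution; otherwise
    the valid range is 5s-50s, and on_wait_timeout decides whether the
    statement CONTINUEs or is CANCELed when the wait elapses.
    """
    opts = {"wait_timeout": wait_timeout, "on_wait_timeout": on_wait_timeout}
    if byte_limit is not None:
        opts["byte_limit"] = byte_limit
    if row_limit is not None:
        opts["row_limit"] = row_limit
    if catalog:
        opts["catalog"] = catalog
    if schema:
        opts["schema"] = schema
    return opts

def needs_polling(response):
    """True while the statement is still executing and Get Statement
    should be called again with response["statement_id"]."""
    return response["status"]["state"] in ("PENDING", "RUNNING")

opts = execution_options(wait_timeout="0s", row_limit=100)
print(opts)
print(needs_polling({"statement_id": "x", "status": {"state": "PENDING"}}))
```

With `wait_timeout="0s"` the call returns immediately in PENDING state, so `needs_polling` would be checked in a loop against Get Statement responses until the state becomes SUCCEEDED, FAILED, or CANCELED.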
Example Output
```json
{
  "statement_id": "01eda0e7-e315-1846-84e2-79a963ffad44",
  "status": {
    "state": "SUCCEEDED"
  },
  "manifest": {
    "format": "JSON_ARRAY",
    "schema": {
      "column_count": 1,
      "columns": [
        {
          "name": "id",
          "position": 0,
          "type_name": "LONG",
          "type_text": "BIGINT"
        }
      ]
    }
  },
  "result": {
    "chunk_index": 0,
    "row_offset": 0,
    "row_count": 3,
    "data_array": [
      [
        "0"
      ],
      [
        "1"
      ],
      [
        "2"
      ]
    ]
  }
}
```
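In the output above, column metadata lives in the manifest while the rows arrive as positional arrays of strings under `result.data_array`. A small sketch of pairing the two, assuming the response shape shown (the `rows_as_dicts` helper is illustrative):

```python
def rows_as_dicts(response):
    """Zip each data_array row with the column names from the manifest.
    JSON_ARRAY cells arrive as strings, so numeric columns may need
    casting by the caller."""
    names = [c["name"] for c in response["manifest"]["schema"]["columns"]]
    return [dict(zip(names, row)) for row in response["result"]["data_array"]]

# Trimmed version of the example response above.
sample = {
    "manifest": {"schema": {"columns": [{"name": "id"}]}},
    "result": {"data_array": [["0"], ["1"], ["2"]]},
}
print(rows_as_dicts(sample))  # [{'id': '0'}, {'id': '1'}, {'id': '2'}]
```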
Workflow Library Example
Execute Sql Statement with Databricks and Send Results Via Email