Update BigQuery Table

Updates information in an existing table. The update method replaces the entire table resource, whereas the patch method only replaces fields that are provided in the submitted table resource.
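As an illustration of that difference, the sketch below calls the underlying BigQuery REST endpoints directly from Python with google-auth. It shows what the action does at the API level, not the action's own implementation; the project, dataset, and table names are placeholders, and Application Default Credentials are assumed.

import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/bigquery"])
session = AuthorizedSession(credentials)

url = (
    "https://bigquery.googleapis.com/bigquery/v2/"
    "projects/my-project/datasets/my_dataset/tables/my_table"
)

# tables.update (PUT): the body replaces the whole table resource, so start from
# the current resource; writable fields left out of the body are cleared.
table = session.get(url).json()
table["description"] = "Updated via tables.update"
updated = session.put(url, json=table).json()

# tables.patch (PATCH): only the fields present in the body are changed.
patched = session.patch(url, json={"description": "Updated via tables.patch"}).json()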

External Documentation

To learn more, visit the GCP documentation.

Basic Parameters

Parameter | Description
Dataset ID | Dataset ID of the table to update.
Project ID | Project ID of the table to update.
Table ID | Table ID of the table to update.
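Purely for illustration, these three parameters identify the table both in the request URL and in the tableReference block of the table resource. The values below are hypothetical.

# Hypothetical values showing how the basic parameters identify the table.
project_id, dataset_id, table_id = "my-project", "my_dataset", "my_table"

url = (
    "https://bigquery.googleapis.com/bigquery/v2/"
    f"projects/{project_id}/datasets/{dataset_id}/tables/{table_id}"
)
table_reference = {
    "projectId": project_id,
    "datasetId": dataset_id,
    "tableId": table_id,
}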

Advanced Parameters

Parameter | Description
Clustering Fields | [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of the columns you specify is important: it determines the sort order of the data. See the example after this table.
Default Collation | [Output-only] The default collation of the table.
Description | [Optional] A user-friendly description of this table.
Encryption Configuration Kms Key Name | [Optional] Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery service account associated with your project requires access to this encryption key.
Expiration Time | [Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
External Data Configuration | [Optional] Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. For information on how to format the object, see https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#ExternalDataConfiguration. A sketch of a typical configuration follows this table.
Friendly Name | [Optional] A descriptive name for this table.
Kind | [Output-only] The type of the resource.
Labels | The labels associated with this table. You can use these to organize and group your tables. Label keys and values can be no longer than 63 characters and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter, and each label in the list must have a different key.
Materialized View Enable Refresh | [Optional][TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
Materialized View Query | [Required] A query whose result is persisted.
Materialized View Refresh Interval Ms | [Optional][TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
Require Partition Filter | [Optional] If set to true, queries over this table must specify a partition filter that can be used for partition elimination.
Schema Fields | Describes the fields in a table.
Table Reference Dataset ID | [Required] The ID of the dataset containing this table.
Table Reference Project ID | [Required] The ID of the project containing this table.
Table Reference Table ID | [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
Time Partitioning Expiration Ms | [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
Time Partitioning Field | [Beta][Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type or '_PARTITIONDATE' as DATE type. If a field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
Time Partitioning Type | [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
View Query | [Required] A query that BigQuery executes when the view is referenced.
View Use Legacy Sql | Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/. Queries and views that reference this view must use the same flag value.
View User Defined Function Resources | Describes user-defined function resources used in the query.
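To make the partitioning, clustering, expiration, and label parameters above more concrete, here is a minimal sketch of the corresponding fragment of a table resource body. Field names follow the BigQuery v2 table resource; the column names, label values, and retention periods are assumptions, not recommendations.

import time

# Milliseconds-since-epoch expiration one week from now (Expiration Time).
one_week_ms = 7 * 24 * 60 * 60 * 1000
expiration_time = str(int(time.time() * 1000) + one_week_ms)

table_update = {
    "timePartitioning": {
        "type": "DAY",                                   # Time Partitioning Type
        "field": "event_date",                           # top-level DATE or TIMESTAMP column
        "expirationMs": str(30 * 24 * 60 * 60 * 1000),   # keep each partition for 30 days
    },
    "clustering": {
        # Clustering Fields: the order listed here is the sort order of the data.
        "fields": ["customer_id", "country"]
    },
    "expirationTime": expiration_time,
    "requirePartitionFilter": True,
    # Labels: lowercase letters, numbers, underscores, and dashes; keys start with a letter.
    "labels": {"team": "analytics", "env": "prod"},
}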
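Similarly, the External Data Configuration parameter takes an ExternalDataConfiguration object as documented at the link above. The following is a hedged example for CSV files in Cloud Storage; the bucket, file pattern, and schema are invented.

external_data_configuration = {
    "sourceFormat": "CSV",
    "sourceUris": ["gs://my-bucket/exports/orders-*.csv"],
    "autodetect": False,
    "csvOptions": {
        "skipLeadingRows": "1",      # skip the header row
        "fieldDelimiter": ",",
        "allowJaggedRows": False,
    },
    "schema": {
        "fields": [
            {"name": "order_id", "type": "INTEGER", "mode": "REQUIRED"},
            {"name": "order_date", "type": "DATE", "mode": "NULLABLE"},
            {"name": "amount", "type": "NUMERIC", "mode": "NULLABLE"},
        ]
    },
}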

Example Output

{
"clustering": {
"fields": [
"string"
]
},
"creationTime": "[Output-only] The time when this table was created, in milliseconds since the epoch.",
"defaultCollation": "[Output-only] The default collation of the table.",
"description": "[Optional] A user-friendly description of this table.",
"encryptionConfiguration": {
"kmsKeyName": "[Optional] Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key."
},
"etag": "[Output-only] A hash of the table metadata. Used to ensure there were no concurrent modifications to the resource when attempting an update. Not guaranteed to change when the table contents or the fields numRows, numBytes, numLongTermBytes or lastModifiedTime change.",
"expirationTime": "[Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.",
"externalDataConfiguration": {
"autodetect": false,
"avroOptions": {
"useAvroLogicalTypes": false
},
"bigtableOptions": {
"columnFamilies": [
{
"columns": [
{
"encoding": "[Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.",
"fieldName": "[Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.",
"onlyReadLatest": false,
"qualifierEncoded": "[Required] Qualifier of the column. Columns in the parent column family that has this exact qualifier are exposed as . field. If the qualifier is valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.",
"qualifierString": "string",
"type": "[Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels."
}
],
"encoding": "[Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.",
"familyId": "Identifier of the column family.",
"onlyReadLatest": false,
"type": "[Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it."
}
],
"ignoreUnspecifiedColumnFamilies": false,
"readRowkeyAsString": false
},
"compression": "[Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.",
"connectionId": "[Optional, Trusted Tester] Connection for external data source.",
"csvOptions": {
"allowJaggedRows": false,
"allowQuotedNewlines": false,
"encoding": "[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.",
"fieldDelimiter": "[Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence \"\\t\" to specify a tab separator. The default value is a comma (',').",
"null_marker": "[Optional] An custom string that will represent a NULL value in CSV import data.",
"quote": "\"",
"skipLeadingRows": "[Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N \u003e 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema."
},
"decimalTargetTypes": [
"string"
],
"googleSheetsOptions": {
"range": "[Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20",
"skipLeadingRows": "[Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N \u003e 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema."
},
"hivePartitioningOptions": {
"mode": "[Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.",
"requirePartitionFilter": false,
"sourceUriPrefix": "[Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter)."
},
"ignoreUnknownValues": false,
"maxBadRecords": 0,
"parquetOptions": {
"enableListInference": false,
"enumAsString": false
},
"schema": {
"fields": [
{
"categories": {
"names": [
"string"
]
},
"collationSpec": "Optional. Collation specification of the field. It only can be set on string type field.",
"description": "[Optional] The field description. The maximum length is 1,024 characters.",
"fields": [
null
],
"maxLength": "[Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = \"STRING\", then max_length represents the maximum UTF-8 length of strings in this field. If type = \"BYTES\", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠ \"STRING\" and ≠ \"BYTES\".",
"mode": "[Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.",
"name": "[Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.",
"policyTags": {
"names": [
"string"
]
},
"precision": "[Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠ \"NUMERIC\" and ≠ \"BIGNUMERIC\". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = \"NUMERIC\": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = \"BIGNUMERIC\": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = \"NUMERIC\": 1 ≤ precision ≤ 29. - If type = \"BIGNUMERIC\": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.",
"scale": "[Optional] See documentation for precision.",
"type": "[Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD)."
}
]
},
"sourceFormat": "[Required] The data format. For CSV files, specify \"CSV\". For Google sheets, specify \"GOOGLE_SHEETS\". For newline-delimited JSON, specify \"NEWLINE_DELIMITED_JSON\". For Avro files, specify \"AVRO\". For Google Cloud Datastore backups, specify \"DATASTORE_BACKUP\". [Beta] For Google Cloud Bigtable, specify \"BIGTABLE\".",
"sourceUris": [
"string"
]
},
"friendlyName": "[Optional] A descriptive name for this table.",
"id": "[Output-only] An opaque ID uniquely identifying the table.",
"kind": "bigquery#table",
"labels": {},
"lastModifiedTime": "[Output-only] The time when this table was last modified, in milliseconds since the epoch.",
"location": "[Output-only] The geographic location where the table resides. This value is inherited from the dataset.",
"materializedView": {
"enableRefresh": false,
"lastRefreshTime": "[Output-only] [TrustedTester] The time when this materialized view was last modified, in milliseconds since the epoch.",
"query": "[Required] A query whose result is persisted.",
"refreshIntervalMs": "[Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is \"1800000\" (30 minutes)."
},
"model": {
"modelOptions": {
"labels": [
"string"
],
"lossType": "string",
"modelType": "string"
},
"trainingRuns": [
{
"iterationResults": [
{
"durationMs": "[Output-only, Beta] Time taken to run the training iteration in milliseconds.",
"evalLoss": 0,
"index": 0,
"learnRate": 0,
"trainingLoss": 0
}
],
"startTime": "[Output-only, Beta] Training run start time in milliseconds since the epoch.",
"state": "[Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.",
"trainingOptions": {
"earlyStop": false,
"l1Reg": 0,
"l2Reg": 0,
"learnRate": 0,
"learnRateStrategy": "string",
"lineSearchInitLearnRate": 0,
"maxIteration": "int64",
"minRelProgress": 0,
"warmStart": false
}
}
]
},
"numBytes": "[Output-only] The size of this table in bytes, excluding any data in the streaming buffer.",
"numLongTermBytes": "[Output-only] The number of bytes in the table that are considered \"long-term storage\".",
"numPhysicalBytes": "[Output-only] [TrustedTester] The physical size of this table in bytes, excluding any data in the streaming buffer. This includes compression and storage used for time travel.",
"numRows": "[Output-only] The number of rows of data in this table, excluding any data in the streaming buffer.",
"rangePartitioning": {
"field": "[TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.",
"range": {
"end": "[TrustedTester] [Required] The end of range partitioning, exclusive.",
"interval": "[TrustedTester] [Required] The width of each interval.",
"start": "[TrustedTester] [Required] The start of range partitioning, inclusive."
}
},
"requirePartitionFilter": false,
"schema": {
"fields": [
{
"categories": {
"names": [
"string"
]
},
"collationSpec": "Optional. Collation specification of the field. It only can be set on string type field.",
"description": "[Optional] The field description. The maximum length is 1,024 characters.",
"fields": [
null
],
"maxLength": "[Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = \"STRING\", then max_length represents the maximum UTF-8 length of strings in this field. If type = \"BYTES\", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠ \"STRING\" and ≠ \"BYTES\".",
"mode": "[Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.",
"name": "[Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.",
"policyTags": {
"names": [
"string"
]
},
"precision": "[Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠ \"NUMERIC\" and ≠ \"BIGNUMERIC\". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = \"NUMERIC\": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = \"BIGNUMERIC\": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = \"NUMERIC\": 1 ≤ precision ≤ 29. - If type = \"BIGNUMERIC\": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.",
"scale": "[Optional] See documentation for precision.",
"type": "[Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD)."
}
]
},
"selfLink": "[Output-only] A URL that can be used to access this resource again.",
"snapshotDefinition": {
"baseTableReference": {
"datasetId": "[Required] The ID of the dataset containing this table.",
"projectId": "[Required] The ID of the project containing this table.",
"tableId": "[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters."
},
"snapshotTime": "[Required] The time at which the base table was snapshot. This value is reported in the JSON response using RFC3339 format."
},
"streamingBuffer": {
"estimatedBytes": "[Output-only] A lower-bound estimate of the number of bytes currently in the streaming buffer.",
"estimatedRows": "[Output-only] A lower-bound estimate of the number of rows currently in the streaming buffer.",
"oldestEntryTime": "[Output-only] Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available."
},
"tableReference": {
"datasetId": "[Required] The ID of the dataset containing this table.",
"projectId": "[Required] The ID of the project containing this table.",
"tableId": "[Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters."
},
"timePartitioning": {
"expirationMs": "[Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.",
"field": "[Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.",
"requirePartitionFilter": false,
"type": "[Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY."
},
"type": "[Output-only] Describes the table type. The following values are supported: TABLE: A normal BigQuery table. VIEW: A virtual table defined by a SQL query. SNAPSHOT: An immutable, read-only table that is a copy of another table. [TrustedTester] MATERIALIZED_VIEW: SQL query whose result is persisted. EXTERNAL: A table that references data stored in an external storage system, such as Google Cloud Storage. The default value is TABLE.",
"view": {
"query": "[Required] A query that BigQuery executes when the view is referenced.",
"useExplicitColumnNames": false,
"useLegacySql": false,
"userDefinedFunctionResources": [
{
"inlineCode": "[Pick one] An inline resource that contains code for a user-defined function (UDF). Providing a inline code resource is equivalent to providing a URI for a file containing the same code.",
"resourceUri": "[Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path)."
}
]
}
}
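The etag in the response above exists to detect concurrent modifications. One common way to use it, sketched here under the same placeholder names and credential assumptions as the earlier example, is to send it back as an If-Match header so a full update fails with 412 Precondition Failed if the table changed after it was read.

import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/bigquery"])
session = AuthorizedSession(credentials)
url = (
    "https://bigquery.googleapis.com/bigquery/v2/"
    "projects/my-project/datasets/my_dataset/tables/my_table"
)

table = session.get(url).json()
table["friendlyName"] = "Orders (curated)"

# If-Match ties the update to the etag we read; a mismatch aborts the request.
response = session.put(url, json=table, headers={"If-Match": table["etag"]})
response.raise_for_status()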

Workflow Library Example

Update Bigquery Table with Gcp and Send Results Via Email
