Commit 613d11b

Upgrade Go SDK to 0.99.0 (#4348)
## Changes

Upgrade Go SDK to 0.99.0.

## Why

We will upgrade to the new 0.100.0 Go SDK tomorrow, along with the corresponding Terraform provider. This PR narrows the scope of that upgrade so we don't jump straight from 0.96 to 0.100.0.

## Tests

Tests pass.
1 parent 3bcf3e7 commit 613d11b

File tree

35 files changed: +1681 −129 lines


.codegen/_openapi_sha

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-dbf9b0a4e0432e846520442b14c34fc7f0ca0d8c
+76dbe1cb1a0a017a4484757cb4e542a30a87e9b3

acceptance/bundle/refschema/out.fields.txt

Lines changed: 82 additions & 0 deletions
Large diffs are not rendered by default.

acceptance/help/output.txt

Lines changed: 6 additions & 3 deletions
@@ -33,7 +33,7 @@ Real-time Serving
   serving-endpoints                  The Serving Endpoints API allows you to create, update, and delete model serving endpoints.
 
 Apps
-  apps                               Apps run directly on a customers Databricks instance, integrate with their data, use and extend Databricks services, and enable users to interact through single sign-on.
+  apps                               Apps run directly on a customer's Databricks instance, integrate with their data, use and extend Databricks services, and enable users to interact through single sign-on.
 
 Vector Search
   vector-search-endpoints            **Endpoint**: Represents the compute resources to host vector search indexes.
@@ -81,7 +81,7 @@ Unity Catalog
   model-versions                     Databricks provides a hosted version of MLflow Model Registry in Unity Catalog.
   online-tables                      Online tables provide lower latency and higher QPS access to data from Delta tables.
   policies                           Attribute-Based Access Control (ABAC) provides high leverage governance for enforcing compliance policies in Unity Catalog.
-  quality-monitors                   A monitor computes and monitors data or model quality metrics for a table over time.
+  quality-monitors                   [DEPRECATED] This API is deprecated.
   registered-models                  Databricks provides a hosted version of MLflow Model Registry in Unity Catalog.
   resource-quotas                    Unity Catalog enforces resource quotas on all securable objects, which limits the number of resources that can be created.
   rfa                                Request for Access enables users to request access for Unity Catalog securables.
@@ -136,7 +136,7 @@ Clean Rooms
   clean-rooms                        A clean room uses Delta Sharing and serverless compute to provide a secure and privacy-protecting environment where multiple parties can work together on sensitive enterprise data without direct access to each other's data.
 
 Quality Monitor
-  quality-monitor-v2                 Manage data quality of UC objects (currently support schema).
+  quality-monitor-v2                 [DEPRECATED] This API is deprecated.
 
 Data Quality Monitoring
   data-quality                       Manage the data quality of Unity Catalog objects (currently support schema and table).
@@ -149,6 +149,9 @@ Tags
   tag-policies                       The Tag Policy API allows you to manage policies for governed tags in Databricks.
   workspace-entity-tag-assignments   Manage tag assignments on workspace-scoped objects.
 
+Postgres
+  postgres                           Use the Postgres API to create and manage Lakebase Autoscaling Postgres infrastructure, including projects, branches, compute endpoints, and roles.
+
 Developer Tools
   bundle                             Databricks Asset Bundles let you express data/AI/analytics projects as code.
   sync                               Synchronize a local directory to a workspace directory

bundle/direct/dresources/cluster.go

Lines changed: 6 additions & 0 deletions
@@ -44,6 +44,7 @@ func (r *ResourceCluster) RemapState(input *compute.ClusterDetails) *compute.Clu
 		DockerImage:               input.DockerImage,
 		DriverInstancePoolId:      input.DriverInstancePoolId,
 		DriverNodeTypeId:          input.DriverNodeTypeId,
+		DriverNodeTypeFlexibility: input.DriverNodeTypeFlexibility,
 		EnableElasticDisk:         input.EnableElasticDisk,
 		EnableLocalDiskEncryption: input.EnableLocalDiskEncryption,
 		GcpAttributes:             input.GcpAttributes,
@@ -64,6 +65,7 @@ func (r *ResourceCluster) RemapState(input *compute.ClusterDetails) *compute.Clu
 		TotalInitialRemoteDiskSize: input.TotalInitialRemoteDiskSize,
 		UseMlRuntime:               input.UseMlRuntime,
 		WorkloadType:               input.WorkloadType,
+		WorkerNodeTypeFlexibility:  input.WorkerNodeTypeFlexibility,
 		ForceSendFields:            utils.FilterFields[compute.ClusterSpec](input.ForceSendFields),
 	}
 	if input.Spec != nil {
@@ -159,6 +161,7 @@ func makeCreateCluster(config *compute.ClusterSpec) compute.CreateCluster {
 		DockerImage:               config.DockerImage,
 		DriverInstancePoolId:      config.DriverInstancePoolId,
 		DriverNodeTypeId:          config.DriverNodeTypeId,
+		DriverNodeTypeFlexibility: config.DriverNodeTypeFlexibility,
 		EnableElasticDisk:         config.EnableElasticDisk,
 		EnableLocalDiskEncryption: config.EnableLocalDiskEncryption,
 		GcpAttributes:             config.GcpAttributes,
@@ -179,6 +182,7 @@ func makeCreateCluster(config *compute.ClusterSpec) compute.CreateCluster {
 		TotalInitialRemoteDiskSize: config.TotalInitialRemoteDiskSize,
 		UseMlRuntime:               config.UseMlRuntime,
 		WorkloadType:               config.WorkloadType,
+		WorkerNodeTypeFlexibility:  config.WorkerNodeTypeFlexibility,
 		ForceSendFields:            utils.FilterFields[compute.CreateCluster](config.ForceSendFields),
 	}
 
@@ -206,6 +210,7 @@ func makeEditCluster(id string, config *compute.ClusterSpec) compute.EditCluster
 		DockerImage:               config.DockerImage,
 		DriverInstancePoolId:      config.DriverInstancePoolId,
 		DriverNodeTypeId:          config.DriverNodeTypeId,
+		DriverNodeTypeFlexibility: config.DriverNodeTypeFlexibility,
 		EnableElasticDisk:         config.EnableElasticDisk,
 		EnableLocalDiskEncryption: config.EnableLocalDiskEncryption,
 		GcpAttributes:             config.GcpAttributes,
@@ -226,6 +231,7 @@ func makeEditCluster(id string, config *compute.ClusterSpec) compute.EditCluster
 		TotalInitialRemoteDiskSize: config.TotalInitialRemoteDiskSize,
 		UseMlRuntime:               config.UseMlRuntime,
 		WorkloadType:               config.WorkloadType,
+		WorkerNodeTypeFlexibility:  config.WorkerNodeTypeFlexibility,
 		ForceSendFields:            utils.FilterFields[compute.EditCluster](config.ForceSendFields),
 	}
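The new `DriverNodeTypeFlexibility`/`WorkerNodeTypeFlexibility` fields correspond to cluster settings in bundle configuration. A minimal sketch of how they might be used; the resource name and node type IDs are illustrative assumptions, not from this PR:

```
# databricks.yml fragment (hypothetical): a cluster that can fall back to
# alternate node types during launch and upscale when the primary type is
# unavailable. Node type IDs below are examples only.
resources:
  clusters:
    example_cluster:
      cluster_name: example
      spark_version: 15.4.x-scala2.12
      node_type_id: i3.xlarge
      num_workers: 2
      driver_node_type_flexibility:
        alternate_node_type_ids: ["i3.2xlarge"]
      worker_node_type_flexibility:
        alternate_node_type_ids: ["i3.2xlarge", "m5.xlarge"]
```

Per the schema descriptions in this commit, `alternate_node_type_ids` lists fallback node types tried when the primary `node_type_id` is unavailable.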

bundle/internal/schema/annotations_openapi.yml

Lines changed: 88 additions & 0 deletions
@@ -242,6 +242,9 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
       The optional ID of the instance pool for the driver of the cluster belongs.
       The pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not
       assigned.
+  "driver_node_type_flexibility":
+    "description": |-
+      Flexible node type configuration for the driver node.
   "driver_node_type_id":
     "description": |-
       The node type of the Spark driver.
@@ -356,6 +359,9 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
       This field can only be used when `kind = CLASSIC_PREVIEW`.
 
       `effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not.
+  "worker_node_type_flexibility":
+    "description": |-
+      Flexible node type configuration for worker nodes.
   "workload_type":
     "description": |-
       Cluster Attributes showing for clusters workload types.
@@ -402,45 +408,61 @@ github.com/databricks/cli/bundle/config/resources.DatabaseInstance:
   "effective_capacity":
     "description": |-
       Deprecated. The sku of the instance; this field will always match the value of capacity.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "deprecation_message": |-
       This field is deprecated
     "x-databricks-field-behaviors_output_only": |-
       true
   "effective_custom_tags":
     "description": |-
       The recorded custom tags associated with the instance.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "effective_enable_pg_native_login":
     "description": |-
       Whether the instance has PG native password login enabled.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "effective_enable_readable_secondaries":
     "description": |-
       Whether secondaries serving read-only traffic are enabled. Defaults to false.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "effective_node_count":
     "description": |-
       The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to
       1 primary and 0 secondaries.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "effective_retention_window_in_days":
     "description": |-
       The retention window for the instance. This is the time window in days
       for which the historical data is retained.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "effective_stopped":
     "description": |-
       Whether the instance is stopped.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "effective_usage_policy_id":
     "description": |-
       The policy that is applied to the instance.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "enable_pg_native_login":
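The `effective_*` descriptions added here all follow one pattern: the output-only field echoes the input plus server-side defaults, and the field without the prefix is what you set. A hypothetical bundle fragment illustrating that split; the resource key, name, and values are assumptions for illustration:

```
# databricks.yml fragment (hypothetical): inputs are set without the
# effective_ prefix; the server reports effective_node_count,
# effective_retention_window_in_days, etc. as output-only fields.
resources:
  database_instances:
    example_instance:
      name: example-pg
      node_count: 3                  # read back as effective_node_count
      retention_window_in_days: 7    # read back as effective_retention_window_in_days
```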
@@ -990,11 +1012,15 @@ github.com/databricks/cli/bundle/config/resources.SyncedDatabaseTable:
     "description": |-
       The name of the database instance that this table is registered to. This field is always returned, and for
       tables inside database catalogs is inferred database instance associated with the catalog.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "effective_logical_database_name":
     "description": |-
       The name of the logical database that this table is registered to.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "logical_database_name":
@@ -1790,6 +1816,9 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
       The optional ID of the instance pool for the driver of the cluster belongs.
       The pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not
       assigned.
+  "driver_node_type_flexibility":
+    "description": |-
+      Flexible node type configuration for the driver node.
   "driver_node_type_id":
     "description": |-
       The node type of the Spark driver.
@@ -1904,6 +1933,9 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
       This field can only be used when `kind = CLASSIC_PREVIEW`.
 
       `effective_spark_version` is determined by `spark_version` (DBR release), this field `use_ml_runtime`, and whether `node_type_id` is gpu node or not.
+  "worker_node_type_flexibility":
+    "description": |-
+      Flexible node type configuration for worker nodes.
   "workload_type":
     "description": |-
       Cluster Attributes showing for clusters workload types.
@@ -2164,6 +2196,13 @@ github.com/databricks/databricks-sdk-go/service/compute.MavenLibrary:
     "description": |-
       Maven repo to install the Maven package from. If omitted, both Maven Central Repository
       and Spark Packages are searched.
+github.com/databricks/databricks-sdk-go/service/compute.NodeTypeFlexibility:
+  "_":
+    "description": |-
+      Configuration for flexible node types, allowing fallback to alternate node types during cluster launch and upscale.
+  "alternate_node_type_ids":
+    "description": |-
+      A list of node type IDs to use as fallbacks when the primary node type is unavailable.
 github.com/databricks/databricks-sdk-go/service/compute.PythonPyPiLibrary:
   "package":
     "description": |-
@@ -2287,6 +2326,8 @@ github.com/databricks/databricks-sdk-go/service/database.DatabaseInstanceRef:
       instance was created.
       For a child ref instance, this is the LSN on the instance from which the child instance
       was created.
+      This is an output only field that contains the value computed from the input field combined with
+      server side defaults. Use the field without the effective_ prefix to set the value.
     "x-databricks-field-behaviors_output_only": |-
       true
   "lsn":
@@ -2904,16 +2945,20 @@ github.com/databricks/databricks-sdk-go/service/jobs.JobDeployment:
       The kind of deployment that manages the job.
 
       * `BUNDLE`: The job is managed by Databricks Asset Bundle.
+      * `SYSTEM_MANAGED`: The job is managed by Databricks and is read-only.
   "metadata_file_path":
     "description": |-
       Path of the file that contains deployment metadata.
 github.com/databricks/databricks-sdk-go/service/jobs.JobDeploymentKind:
   "_":
     "description": |-
       * `BUNDLE`: The job is managed by Databricks Asset Bundle.
+      * `SYSTEM_MANAGED`: The job is managed by Databricks and is read-only.
     "enum":
     - |-
       BUNDLE
+    - |-
+      SYSTEM_MANAGED
 github.com/databricks/databricks-sdk-go/service/jobs.JobEditMode:
   "_":
     "description": |-
@@ -3766,6 +3811,18 @@ github.com/databricks/databricks-sdk-go/service/ml.ModelTag:
   "value":
     "description": |-
       The tag value.
+github.com/databricks/databricks-sdk-go/service/pipelines.AutoFullRefreshPolicy:
+  "_":
+    "description": |-
+      Policy for auto full refresh.
+  "enabled":
+    "description": |-
+      (Required, Mutable) Whether to enable auto full refresh or not.
+  "min_interval_hours":
+    "description": |-
+      (Optional, Mutable) Specify the minimum interval in hours between the timestamp
+      at which a table was last full refreshed and the current timestamp for triggering auto full
+      If unspecified and autoFullRefresh is enabled then by default min_interval_hours is 24 hours.
 github.com/databricks/databricks-sdk-go/service/pipelines.ConnectionParameters:
   "source_catalog":
     "description": |-
@@ -3869,6 +3926,9 @@ github.com/databricks/databricks-sdk-go/service/pipelines.IngestionPipelineDefin
   "connection_name":
     "description": |-
       Immutable. The Unity Catalog connection that this ingestion pipeline uses to communicate with the source. This is used with connectors for applications like Salesforce, Workday, and so on.
+  "full_refresh_window":
+    "description": |-
+      (Optional) A window that specifies a set of time ranges for snapshot queries in CDC.
   "ingest_from_uc_foreign_catalog":
     "description": |-
       Immutable. If set to true, the pipeline will ingest tables from the
@@ -4025,6 +4085,21 @@ github.com/databricks/databricks-sdk-go/service/pipelines.Notifications:
   "email_recipients":
     "description": |-
       A list of email addresses notified when a configured alert is triggered.
+github.com/databricks/databricks-sdk-go/service/pipelines.OperationTimeWindow:
+  "_":
+    "description": |-
+      Proto representing a window
+  "days_of_week":
+    "description": |-
+      Days of week in which the window is allowed to happen
+      If not specified all days of the week will be used.
+  "start_hour":
+    "description": |-
+      An integer between 0 and 23 denoting the start hour for the window in the 24-hour day.
+  "time_zone_id":
+    "description": |-
+      Time zone id of window. See https://docs.databricks.com/sql/language-manual/sql-ref-syntax-aux-conf-mgmt-set-timezone.html for details.
+      If not specified, UTC will be used.
 github.com/databricks/databricks-sdk-go/service/pipelines.PathPattern:
   "include":
     "description": |-
@@ -4313,6 +4388,19 @@ github.com/databricks/databricks-sdk-go/service/pipelines.TableSpec:
     "description": |-
       Configuration settings to control the ingestion of tables. These settings override the table_configuration defined in the IngestionPipelineDefinition object and the SchemaSpec.
 github.com/databricks/databricks-sdk-go/service/pipelines.TableSpecificConfig:
+  "auto_full_refresh_policy":
+    "description": |-
+      (Optional, Mutable) Policy for auto full refresh, if enabled pipeline will automatically try
+      to fix issues by doing a full refresh on the table in the retry run. auto_full_refresh_policy
+      in table configuration will override the above level auto_full_refresh_policy.
+      For example,
+      {
+        "auto_full_refresh_policy": {
+          "enabled": true,
+          "min_interval_hours": 23,
+        }
+      }
+      If unspecified, auto full refresh is disabled.
   "exclude_columns":
     "description": |-
       A list of column names to be excluded for the ingestion.
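Taken together, the new pipeline fields could appear in an ingestion definition roughly like this hypothetical sketch. The connection, catalog, schema, and table names are illustrative, and the `days_of_week` enum values are assumptions; the field shapes follow the `IngestionPipelineDefinition`, `OperationTimeWindow`, and `AutoFullRefreshPolicy` descriptions in this file:

```
# databricks.yml fragment (hypothetical): full_refresh_window restricts CDC
# snapshot queries to early Sunday mornings UTC; auto_full_refresh_policy lets
# the pipeline repair a table with a full refresh at most once every 24 hours.
resources:
  pipelines:
    example_pipeline:
      ingestion_definition:
        connection_name: example_connection
        full_refresh_window:
          days_of_week: ["SUNDAY"]   # assumed enum spelling
          start_hour: 1
          time_zone_id: UTC
        objects:
          - table:
              source_catalog: src
              source_schema: sales
              source_table: orders
              table_configuration:
                auto_full_refresh_policy:
                  enabled: true
                  min_interval_hours: 24
```

Note that a table-level `auto_full_refresh_policy` overrides any policy set at a higher level, per the description above.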

bundle/internal/validation/generated/enum_fields.go

Lines changed: 2 additions & 1 deletion
Some generated files are not rendered by default.
