
Commit d00476d

Preserve profile fields on re-login instead of wiping all keys (#4597)
## Why `SaveToProfile` deleted **every key** in a `.databrickscfg` profile section before writing, then only wrote back the small subset of fields the caller explicitly set. This meant every `databricks auth login` or `databricks configure` silently destroyed fields like `cluster_id`, `warehouse_id`, `scopes`, `azure_environment`, and any custom keys the user had added to their profile. This is especially problematic as profiles carry more explicit fields in the host-agnostic auth work (`workspace_id`, `account_id`, `azure_environment`). Users shouldn't have to re-specify everything every time they re-login. ## Changes ### Core: `SaveToProfile` merge semantics (`libs/databrickscfg/ops.go`) **Before:** Delete all keys in the section, then write only non-zero fields from the new config. **After:** Existing keys not mentioned in the new config are preserved. Non-zero fields from the new config overwrite existing values. A new `clearKeys ...string` variadic parameter lets callers explicitly remove specific keys. ### `auth login` (`cmd/auth/login.go`) **Before:** Re-login wiped everything. `cluster_id` and `serverless_compute_id` were manually read back from the existing profile in the default case (no `--configure-cluster`/`--configure-serverless`), but `warehouse_id`, `azure_environment`, custom keys, etc. were always lost. **After:** - All non-auth fields are preserved automatically via merge semantics (no manual read-back needed). - Incompatible auth credentials (PAT token, basic auth, M2M client secrets, Azure/GCP credentials, metadata service URL, OIDC tokens) are explicitly cleared when switching to OAuth. - `--configure-cluster` explicitly clears `serverless_compute_id` (and vice versa) for mutual exclusion. - `experimental_is_unified_host` is explicitly cleared when `false` (since `false` is a zero value and would otherwise be skipped by the merge, leaving a stale `true` from a previous login). 
- **Scopes are preserved:** when `--scopes` is not passed, existing scopes from the profile are read back and used for the OAuth challenge. This means the minted token matches the profile's scope configuration. Previously, scopes were always wiped and the default `all-apis` was used.

### Inline login in `auth token` (`cmd/auth/token.go`)

**Before:** `runInlineLogin` (the "create new profile" path in the interactive `auth token` flow) saved a minimal set of fields and wiped everything else. It did not handle scopes or `experimental_is_unified_host` clearing.

**After:**

- Same auth credential clearing as `auth login`.
- `experimental_is_unified_host` is explicitly cleared when `false`.
- Existing profile scopes are read back and used for the OAuth challenge (same as `auth login`).

### `databricks configure` (`cmd/configure/configure.go`)

**Before:** PAT configure wiped all keys including `auth_type`, `scopes`, `azure_environment`, etc. — which was correct for `auth_type`/`scopes` but destroyed useful non-auth fields.

**After:**

- Non-auth fields (`cluster_id`, `warehouse_id`, `azure_environment`, `account_id`, `workspace_id`, custom keys) are preserved.
- OAuth metadata (`auth_type`, `scopes`, `databricks_cli_path`) is explicitly cleared — a PAT profile shouldn't keep OAuth artifacts.
- All non-PAT auth credentials (basic auth, M2M, Azure, GCP, OIDC, metadata service) are explicitly cleared to prevent multi-auth conflicts.
- `experimental_is_unified_host` is always cleared (PAT profiles don't use unified hosts).
- `serverless_compute_id` is cleared when `cluster_id` is set — whether via the `--configure-cluster` flag **or** via the `DATABRICKS_CLUSTER_ID` env var (previously only the flag was checked).

### Profile struct (`libs/databrickscfg/profile/`)

- Added a `Scopes` field to the `profile.Profile` struct and read it from the INI file in `LoadProfiles`. This allows `auth login` and `auth token` to read existing scopes back from the profile.
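The merge semantics above can be sketched with a minimal, self-contained model. This is a hypothetical simplification over a plain string map — the real `SaveToProfile` edits an INI section on disk — but the preserve/overwrite/clear rules are the same:

```go
package main

import "fmt"

// saveToProfile models the new merge semantics: keys absent from updates
// are preserved, non-zero (non-empty) updates overwrite, and clearKeys
// are removed explicitly. (Simplified sketch, not the real implementation.)
func saveToProfile(existing, updates map[string]string, clearKeys ...string) map[string]string {
	merged := make(map[string]string, len(existing))
	for k, v := range existing {
		merged[k] = v // preserve everything by default
	}
	for k, v := range updates {
		if v != "" { // zero values are skipped rather than written
			merged[k] = v
		}
	}
	for _, k := range clearKeys {
		delete(merged, k) // explicit opt-in removal
	}
	return merged
}

func main() {
	existing := map[string]string{
		"host":       "https://old-host",
		"cluster_id": "existing-cluster-123",
		"token":      "dapi-old-pat",
	}
	// Re-login writes host and auth_type, and explicitly clears the stale PAT.
	updates := map[string]string{"host": "https://new-host", "auth_type": "databricks-cli"}
	merged := saveToProfile(existing, updates, "token")
	fmt.Println(merged) // cluster_id survives; token is gone
}
```

Under this model it is clear why `experimental_is_unified_host` needs explicit clearing: `false` renders as a zero value, so the merge would otherwise never write it.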
## Test plan

**Unit tests (`libs/databrickscfg/ops_test.go`):**

- [x] `TestSaveToProfile_MergePreservesExistingKeys` — token survives when only auth_type is written
- [x] `TestSaveToProfile_ClearKeysRemovesSpecifiedKeys` — token and cluster_id cleared, serverless_compute_id added
- [x] `TestSaveToProfile_OverwritesExistingValues` — host updated from old to new
- [x] `TestSaveToProfile_ClearKeysOnNonExistentKeyIsNoop` — clearing nonexistent keys doesn't error

**Unit tests (`cmd/configure/configure_test.go`):**

- [x] `TestConfigureClearsOAuthAuthType` — PAT configure clears `auth_type` and `scopes` from a previously OAuth-configured profile
- [x] `TestConfigureClearsUnifiedHostMetadata` — PAT configure clears `experimental_is_unified_host` while preserving `account_id`/`workspace_id`
- [x] `TestConfigureClearsServerlessWhenClusterFromEnv` — `serverless_compute_id` cleared when the `DATABRICKS_CLUSTER_ID` env var provides cluster_id

**Acceptance test (`acceptance/cmd/auth/login/preserve-fields/`):**

- [x] Profile with `cluster_id`, `warehouse_id`, `azure_environment`, and `custom_key` → all four survive an `auth login` re-login without `--configure-cluster`/`--configure-serverless`

**Existing tests:**

- [x] All 7 `auth login` acceptance tests pass (including `configure-serverless`, which verifies `cluster_id` is still cleared)
- [x] All `cmd/auth/` unit tests pass
- [x] All `cmd/configure/` unit tests pass
- [x] `make checks` clean
- [x] `make lintfull` clean (0 issues)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
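As an aside, the scope read-back precedence described in the Changes section (explicit `--scopes` flag, then scopes already stored in the profile, then the `all-apis` default) can be sketched standalone. The helper name below is hypothetical, not code from this commit:

```go
package main

import "fmt"

// chooseScopes picks the OAuth scopes for the login challenge:
// an explicit --scopes flag wins, otherwise scopes already stored in the
// profile are reused, otherwise the default applies. (Hypothetical helper
// illustrating the precedence described in the commit message.)
func chooseScopes(flagScopes, profileScopes string) string {
	if flagScopes != "" {
		return flagScopes
	}
	if profileScopes != "" {
		return profileScopes
	}
	return "all-apis"
}

func main() {
	fmt.Println(chooseScopes("", "sql offline_access")) // profile scopes reused
	fmt.Println(chooseScopes("", ""))                   // falls back to the default
}
```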
1 parent db6b7af commit d00476d

19 files changed

Lines changed: 349 additions & 90 deletions

File tree

acceptance/cmd/auth/login/preserve-fields/out.test.toml

Lines changed: 5 additions & 0 deletions
Some generated files are not rendered by default.
Lines changed: 21 additions & 0 deletions
```
@@ -0,0 +1,21 @@

=== Initial profile
[DEFAULT]
host = [DATABRICKS_URL]
cluster_id = existing-cluster-123
warehouse_id = warehouse-456
azure_environment = USGOVERNMENT
custom_key = my-custom-value

=== Run auth login (no --configure-cluster or --configure-serverless)
>>> [CLI] auth login --host [DATABRICKS_URL] --profile DEFAULT
Profile DEFAULT was successfully saved

=== Profile after auth login — all non-auth fields should be preserved
[DEFAULT]
host = [DATABRICKS_URL]
cluster_id = existing-cluster-123
warehouse_id = warehouse-456
azure_environment = USGOVERNMENT
custom_key = my-custom-value
auth_type = databricks-cli
```
Lines changed: 25 additions & 0 deletions
```
@@ -0,0 +1,25 @@
sethome "./home"

# Create an initial profile with cluster_id, warehouse_id, azure_environment,
# and a custom key that is not a recognized SDK config attribute.
cat > "./home/.databrickscfg" <<EOF
[DEFAULT]
host = $DATABRICKS_HOST
cluster_id = existing-cluster-123
warehouse_id = warehouse-456
azure_environment = USGOVERNMENT
custom_key = my-custom-value
EOF

title "Initial profile\n"
cat "./home/.databrickscfg"

# Use a fake browser that performs a GET on the authorization URL
# and follows the redirect back to localhost.
export BROWSER="browser.py"

title "Run auth login (no --configure-cluster or --configure-serverless)"
trace $CLI auth login --host $DATABRICKS_HOST --profile DEFAULT

title "Profile after auth login — all non-auth fields should be preserved\n"
cat "./home/.databrickscfg"
```
Lines changed: 3 additions & 0 deletions
```
@@ -0,0 +1,3 @@
Ignore = [
  "home"
]
```

acceptance/cmd/configure/clears-oauth-on-pat/out.test.toml

Lines changed: 5 additions & 0 deletions
Some generated files are not rendered by default.
Lines changed: 19 additions & 0 deletions
```
@@ -0,0 +1,19 @@

=== Initial profile (OAuth with unified host)
[DEFAULT]
host = https://host
auth_type = databricks-cli
scopes = all-apis
experimental_is_unified_host = true
account_id = acc-123
workspace_id = ws-456

=== Run configure with PAT token
>>> [CLI] configure --token --host https://host

=== Profile after PAT configure
[DEFAULT]
host = https://host
account_id = acc-123
workspace_id = ws-456
token = [DATABRICKS_TOKEN]
```
Lines changed: 22 additions & 0 deletions
```
@@ -0,0 +1,22 @@
sethome "./home"

# Pre-populate a profile with OAuth metadata and unified host fields
# (as if `auth login` was previously run against a unified host).
cat > "./home/.databrickscfg" <<EOF
[DEFAULT]
host = https://host
auth_type = databricks-cli
scopes = all-apis
experimental_is_unified_host = true
account_id = acc-123
workspace_id = ws-456
EOF

title "Initial profile (OAuth with unified host)\n"
cat "./home/.databrickscfg"

title "Run configure with PAT token"
echo "new-token" | trace $CLI configure --token --host https://host

title "Profile after PAT configure\n"
cat "./home/.databrickscfg"
```
Lines changed: 3 additions & 0 deletions
```
@@ -0,0 +1,3 @@
Ignore = [
  "home"
]
```

acceptance/cmd/configure/clears-serverless-when-cluster-from-env/out.test.toml

Lines changed: 5 additions & 0 deletions
Some generated files are not rendered by default.
Lines changed: 14 additions & 0 deletions
```
@@ -0,0 +1,14 @@

=== Initial profile with serverless_compute_id
[DEFAULT]
host = https://host
serverless_compute_id = auto

=== Run configure with cluster from env var
>>> [CLI] configure --token --host https://host

=== Profile after configure (serverless should be cleared)
[DEFAULT]
host = https://host
cluster_id = env-cluster-789
token = [DATABRICKS_TOKEN]
```
