88 changes: 66 additions & 22 deletions .plans/JDBC_REMOVAL_PLAN.md
@@ -278,8 +278,12 @@ is replaced.
| Step 9c — Swap telemetry imports | ✅ Open | #1131 |
| Step 10a — Replicate SFException, ExecTimeTelemetryData, etc. | ✅ Open | #1132 |
| Step 10b — Swap SFException imports | ✅ Open | #1134 |
| Step 10c — Remove SFSession/SFBaseSession | ⬜ TODO | — |
| Step 10d — Demote JDBC to test scope | ⬜ TODO | — |
| Step 10c — Remove SFSession from storage stack | ✅ Open | #1135 |
| Step 10c2 — Remove SFSession from exceptions + telemetry | ✅ Open | #1136 |
| Step 11a — Replace JDBC HTTP calls with HttpRequestHelper | ⬜ TODO | — |
| Step 11b — Remove FQN SnowflakeSQLException from throws | ⬜ TODO | — |
| Step 11c — Clean up remaining FQN JDBC references | ⬜ TODO | — |
| Step 11d — Demote JDBC to test scope | ⬜ TODO | — |

**Closed PRs:** #1117 (reverted 7b approach), #1122 (reverted 8c approach)
**Other PRs:** #1118 (error/exception tests on master), #1133 (Maven retry config)
@@ -616,34 +620,74 @@ NOT swapped — they interact with JDBC's `RestRequest.executeWithRetries()`.

---

### Step 10c — Remove SFSession/SFBaseSession ⬜ TODO
### Step 10c — Remove SFSession from storage stack ✅ Open (PR #1135)

SFSession/SFBaseSession are always null from ingest callers. Not feasible
to replicate (1498+1404 lines, 156-class transitive closure = 40K lines).
Need to remove these parameter types.
**Done:** Remove SFSession/SFBaseSession parameters and dead session-based
code from storage clients, interface, strategies, factory, agent, config.
Session was always null from ingest callers. -336 lines.

---

### Step 10dDemote JDBC to test scope ⬜ TODO
### Step 10c2Remove SFSession from exceptions + telemetry ✅ Open (PR #1136)

Remaining 27 JDBC imports after Step 10b (all unreplicable due to massive
dependency chains):
- `SFSession`/`SFBaseSession` (15) — parameter types, always null
- `HttpUtil` (2) — GCS client + TelemetryClient
- `RestRequest` (1) — GCS client
- `SnowflakeConnectionV1` (1) — TelemetryClient session path
- `SnowflakeSQLException` (JDBC's, 1) — TelemetryClient
- `ExecTimeTelemetryData`/`HttpResponseContextDto` (2) — GCS client
(replicated but interact with JDBC RestRequest)
- IB `Telemetry`/`TelemetryField`/`TelemetryUtil` (3) —
interact with session.getTelemetryClient()
- `SFSession` in `SnowflakeSQLLoggedException` (2) — parameter types
**Done:** Remove SFSession/SFBaseSession from SnowflakeSQLLoggedException
(all 15 constructors) and TelemetryClient (session-based code). Remove IB
telemetry dead code. Update all callers (~12 files). -339 lines.

Then:
After 10c2: 6 JDBC imports remain + ~70 FQN JDBC references in throws/params.
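As a hedged illustration of the 10c2 constructor change, the sketch below shows one simplified before/after. The class shapes and the method name are placeholders, not the actual ingest classes:

```java
// Hypothetical sketch of one Step 10c2 edit: SnowflakeSQLLoggedException loses its
// always-null SFSession parameter. All names here are simplified placeholders.
class LoggedExceptionSketch {
  static class SnowflakeSQLLoggedException extends Exception {
    // before 10c2: SnowflakeSQLLoggedException(SFSession session, String reason)
    // after 10c2:  the dead session parameter is gone from every constructor
    SnowflakeSQLLoggedException(String reason) {
      super(reason);
    }
  }

  public static void main(String[] args) {
    // callers construct the exception without threading a session through
    SnowflakeSQLLoggedException e = new SnowflakeSQLLoggedException("stage error");
    if (!"stage error".equals(e.getMessage())) throw new AssertionError();
    System.out.println("ok");
  }
}
```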

---

### Step 11a — Replace JDBC HTTP calls with HttpRequestHelper ⬜ TODO

Create `HttpRequestHelper` utility with retry logic (replaces JDBC's
`RestRequest.executeWithRetries` and `HttpUtil.executeGeneralRequest`).
Replace the 6 remaining JDBC imports:

- `TelemetryClient`: replace `HttpUtil.executeGeneralRequest()` +
`SnowflakeSQLException` catch
- `SnowflakeGCSClient`: replace `HttpUtil.getHttpClientWithoutDecompression()`,
`HttpUtil.getHttpClient()`, `RestRequest.executeWithRetries()`,
`HttpUtil.getSocketTimeout()`. Remove `ExecTimeTelemetryData`,
`HttpResponseContextDto`, `RestRequest` imports.

---

### Step 11b — Remove FQN SnowflakeSQLException from throws ⬜ TODO

Mechanical removal of `, net.snowflake.client.jdbc.SnowflakeSQLException`
from ~47 throws clauses across all replicated storage clients, interface,
strategies, factory, and GCS client.
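A hypothetical before/after of one such edit (the method name is invented, and `SnowflakeSQLException` here is a stand-in for the replicated ingest type, not the JDBC class):

```java
// Sketch of a Step 11b throws-clause cleanup; all names are placeholders.
class ThrowsCleanupSketch {
  static class SnowflakeSQLException extends Exception {}

  // before: throws SnowflakeSQLException, net.snowflake.client.jdbc.SnowflakeSQLException
  // after:  only the replicated ingest exception remains in the clause
  static void renewExpiredToken() throws SnowflakeSQLException {
    // token renewal logic elided
  }

  public static void main(String[] args) {
    try {
      renewExpiredToken();
      System.out.println("ok");
    } catch (SnowflakeSQLException e) {
      throw new AssertionError(e);
    }
  }
}
```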

---

### Step 11c — Clean up remaining FQN JDBC references ⬜ TODO

Swap remaining FQN JDBC type references to ingest versions:
- `net.snowflake.client.core.HttpClientSettingsKey` → `HttpClientSettingsKey`
(same package, 8 occurrences in S3HttpUtil, SnowflakeFileTransferAgent,
SnowflakeGCSClient, SnowflakeStorageClient)
- `net.snowflake.client.core.HttpProtocol` → `HttpProtocol` (same package,
1 occurrence in S3HttpUtil)
- `net.snowflake.client.core.OCSPMode` → `net.snowflake.ingest.utils.OCSPMode`
(2 occurrences in SnowflakeFileTransferAgent)
- `net.snowflake.client.jdbc.SnowflakeUtil.convertProxyPropertiesToHttpClientKey`
→ `StorageClientUtil.convertProxyPropertiesToHttpClientKey` (2 occurrences
in SnowflakeFileTransferAgent)
- `static import net.snowflake.client.core.HttpUtil.setSessionlessProxyForAzure`
→ replicate method in StorageClientUtil (4 occurrences in SnowflakeAzureClient)
- `net.snowflake.client.jdbc.cloud.storage.AwsSdkGCPSigner` — JDBC class
reference in GCSAccessStrategyAwsSdk (string constant + class reference,
3 occurrences)

---

### Step 11d — Demote JDBC to test scope ⬜ TODO

After all FQN references are cleaned up:
1. Demote `snowflake-jdbc-thin` to `test` scope in `pom.xml`
2. Remove JDBC shade relocation rules from Maven Shade plugin
3. Remove `snowflake-jdbc-thin` from `public_pom.xml`
4. Run full test suite
3. Run full test suite
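Step 1's scope change would look roughly like this in `pom.xml` (the coordinates are assumed from the plan and the version property is a placeholder):

```xml
<!-- sketch only: artifact coordinates and version property are assumptions -->
<dependency>
  <groupId>net.snowflake</groupId>
  <artifactId>snowflake-jdbc-thin</artifactId>
  <version>${snowflake.jdbc.version}</version>
  <scope>test</scope>
</dependency>
```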

---

@@ -11,14 +11,14 @@
import java.util.LinkedList;
import java.util.Objects;
import java.util.concurrent.Future;
import net.snowflake.client.core.HttpUtil;
import net.snowflake.client.jdbc.SnowflakeSQLException;
import net.snowflake.ingest.streaming.internal.fileTransferAgent.HttpRequestHelper;
import net.snowflake.ingest.streaming.internal.fileTransferAgent.ObjectMapperFactory;
import net.snowflake.ingest.streaming.internal.fileTransferAgent.TelemetryThreadPool;
import net.snowflake.ingest.streaming.internal.fileTransferAgent.log.SFLogger;
import net.snowflake.ingest.streaming.internal.fileTransferAgent.log.SFLoggerFactory;
import net.snowflake.ingest.utils.Stopwatch;
import org.apache.http.HttpHeaders;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
@@ -57,9 +57,6 @@ public class TelemetryClient implements Telemetry {
// false if meet any error when sending metrics
private boolean isTelemetryServiceAvailable = true;

// Retry timeout for the HTTP request
private static final int TELEMETRY_HTTP_RETRY_TIMEOUT_IN_SEC = 1000;

/**
* Constructor for creating a sessionless telemetry client
*
@@ -271,27 +268,26 @@ private boolean sendBatch() throws IOException {
post.setHeader("X-Snowflake-Authorization-Token-Type", this.authType);
post.setHeader(HttpHeaders.ACCEPT, "application/json");

String response = null;

try {
response =
HttpUtil.executeGeneralRequest(
post,
TELEMETRY_HTTP_RETRY_TIMEOUT_IN_SEC,
0,
(int) HttpUtil.getSocketTimeout().toMillis(),
0,
this.httpClient);
HttpResponse response = HttpRequestHelper.execute(this.httpClient, post);
int statusCode = response.getStatusLine().getStatusCode();
stopwatch.stop();
if (statusCode < 200 || statusCode >= 300) {
disableTelemetry(); // when got error like 404 or bad request, disable telemetry in this
// telemetry instance
logger.error(
"Telemetry request failed, HTTP status: {}, reason: {}",
statusCode,
response.getStatusLine().getReasonPhrase());
return false;
}
logger.debug(
"Sending telemetry took {} ms. Batch size: {}",
stopwatch.elapsedMillis(),
tmpList.size());
} catch (SnowflakeSQLException e) {
disableTelemetry(); // when got error like 404 or bad request, disable telemetry in this
// telemetry instance
logger.error(
"Telemetry request failed, response: {}, exception: {}", response, e.getMessage());
} catch (IOException e) {
disableTelemetry();
logger.error("Telemetry request failed, exception: {}", e.getMessage());
return false;
}
}
@@ -0,0 +1,127 @@
/*
* Utility class replacing JDBC's RestRequest.executeWithRetries and HttpUtil.executeGeneralRequest.
* Provides HTTP request execution with exponential backoff retry logic.
*/
package net.snowflake.ingest.streaming.internal.fileTransferAgent;

import java.io.IOException;
import net.snowflake.ingest.streaming.internal.fileTransferAgent.log.SFLogger;
import net.snowflake.ingest.streaming.internal.fileTransferAgent.log.SFLoggerFactory;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpRequestBase;
import org.apache.http.impl.client.CloseableHttpClient;

/**
* A simple HTTP request executor with retry logic, replacing JDBC's RestRequest.executeWithRetries
* and HttpUtil.executeGeneralRequest.
*/
public final class HttpRequestHelper {
private static final SFLogger logger = SFLoggerFactory.getLogger(HttpRequestHelper.class);

private static final int DEFAULT_MAX_RETRIES = 7;
private static final long BASE_BACKOFF_MILLIS = 1000L;
private static final long MAX_BACKOFF_MILLIS = 16000L;

private HttpRequestHelper() {}

/**
* Execute an HTTP request with default retry settings (7 retries, no 403 retry).
*
* @param httpClient the HTTP client to use
* @param httpRequest the request to execute
* @return the HTTP response
* @throws IOException if all retries are exhausted
*/
public static HttpResponse execute(CloseableHttpClient httpClient, HttpRequestBase httpRequest)
throws IOException {
return execute(httpClient, httpRequest, DEFAULT_MAX_RETRIES, false);
}

/**
* Execute an HTTP request with retry logic using exponential backoff with jitter.
*
* <p>Retries on:
*
* <ul>
* <li>IOException (connection failures, timeouts)
* <li>HTTP 5xx (server errors)
* <li>HTTP 408 (request timeout)
* <li>HTTP 429 (too many requests)
* <li>Optionally HTTP 403 (forbidden, useful for presigned URL retries)
* </ul>
*
* @param httpClient the HTTP client to use
* @param httpRequest the request to execute
* @param maxRetries maximum number of retry attempts
* @param retryOnHTTP403 whether to retry on HTTP 403 responses
* @return the HTTP response
* @throws IOException if all retries are exhausted
*/
public static HttpResponse execute(
CloseableHttpClient httpClient,
HttpRequestBase httpRequest,
int maxRetries,
boolean retryOnHTTP403)
throws IOException {
IOException lastException = null;

for (int attempt = 0; attempt <= maxRetries; attempt++) {
try {
HttpResponse response = httpClient.execute(httpRequest);
int statusCode = response.getStatusLine().getStatusCode();

if (isRetryableStatusCode(statusCode, retryOnHTTP403)) {
if (attempt < maxRetries) {
logger.debug(
"Retryable HTTP status {} on attempt {}/{}", statusCode, attempt + 1, maxRetries);
// Release the pooled connection before retrying; an unconsumed response
// entity would leak the connection from the client's pool.
org.apache.http.util.EntityUtils.consumeQuietly(response.getEntity());
sleep(backoffMillis(attempt));
continue;
}
// Last attempt, return whatever we got
}
return response;
} catch (IOException e) {
lastException = e;
if (attempt < maxRetries) {
logger.debug("IOException on attempt {}/{}: {}", attempt + 1, maxRetries, e.getMessage());
sleep(backoffMillis(attempt));
}
}
}

throw new IOException(
"HTTP request failed after " + (maxRetries + 1) + " attempts", lastException);
}

private static boolean isRetryableStatusCode(int statusCode, boolean retryOnHTTP403) {
if (statusCode >= 500) {
return true;
}
if (statusCode == 408 || statusCode == 429) {
return true;
}
if (retryOnHTTP403 && statusCode == 403) {
return true;
}
return false;
}

/**
* Compute backoff with jitter. Base is 1s, doubles each attempt, capped at 16s. Jitter is +/-
* 25%.
*/
private static long backoffMillis(int attempt) {
long backoff = Math.min(BASE_BACKOFF_MILLIS << attempt, MAX_BACKOFF_MILLIS);
// Add jitter: +/- 25%
long jitter = (long) (backoff * 0.25 * (2 * Math.random() - 1));
return Math.max(0, backoff + jitter);
}

private static void sleep(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
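The backoff schedule above (base 1 s, doubling per attempt, capped at 16 s, ±25% jitter) can be sanity-checked with a standalone copy of the computation; this sketch mirrors `backoffMillis` for illustration:

```java
// Standalone copy of HttpRequestHelper's backoff computation, for checking
// the schedule in isolation.
class BackoffSketch {
  static long backoffMillis(int attempt) {
    long backoff = Math.min(1000L << attempt, 16000L);
    long jitter = (long) (backoff * 0.25 * (2 * Math.random() - 1)); // +/- 25%
    return Math.max(0, backoff + jitter);
  }

  public static void main(String[] args) {
    // nominal schedule: 1s, 2s, 4s, 8s, then capped at 16s
    for (int attempt = 0; attempt < 8; attempt++) {
      long nominal = Math.min(1000L << attempt, 16000L);
      long b = backoffMillis(attempt);
      if (b < (long) (nominal * 0.75) - 1 || b > (long) (nominal * 1.25) + 1) {
        throw new AssertionError("attempt " + attempt + ": " + b);
      }
    }
    System.out.println("ok");
  }
}
```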