Conversation
Walkthrough

The PR adds opt-in throttling retries controlled by a new `RetryConfig`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant REST API
    participant Response Handler
    participant Retry Logic
    Client->>REST API: HTTP Request (attempt 1)
    REST API-->>Response Handler: HTTP 429 Response<br/>(Throttled)
    Response Handler->>Retry Logic: Check RetryConfig<br/>(MaxRetries, MaxWaitSeconds)
    alt Retries Enabled
        Retry Logic->>Retry Logic: Parse Retry-After<br/>or compute expBackoff()
        Retry Logic->>Retry Logic: Wait with backoff<br/>(respecting cap)
        Retry Logic->>REST API: HTTP Request (attempt 2)
        REST API-->>Response Handler: HTTP 200 OK
        Response Handler->>Client: Return data
    else Retries Disabled
        Response Handler->>Client: Return ThrottleError
    end
```
```mermaid
sequenceDiagram
    participant Client
    participant Upload Service
    participant Direct Path
    participant Multipart Path
    Client->>Upload Service: Upload(ctx, UploadParams)
    Upload Service->>Upload Service: Check file size vs.<br/>MultipartThreshold
    alt Size ≤ Threshold
        Upload Service->>Direct Path: uploadDirect()
        Direct Path->>Direct Path: File upload
        Direct Path->>Direct Path: Fetch FileInfo
        Direct Path-->>Upload Service: FileInfo
    else Size > Threshold
        Upload Service->>Multipart Path: uploadMultipart()
        Multipart Path->>Multipart Path: Start session
        Multipart Path->>Multipart Path: Upload chunks
        Multipart Path->>Multipart Path: Complete & wait
        Multipart Path-->>Upload Service: FileInfo
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
@coderabbitai please review

✅ Actions performed: Review triggered.
Actionable comments posted: 4
🧹 Nitpick comments (1)
ucare/error.go (1)
Lines 40-53: `ThrottleError` doesn't embed `APIError`, breaking consistency with other error types.

Unlike `AuthError`, `ValidationError`, and `ForbiddenError`, `ThrottleError` doesn't embed `APIError`. This means `errors.As(err, &APIError{})` won't match throttle errors, which could surprise callers expecting all HTTP errors to be catchable via `APIError`. If this is intentional (e.g., throttle errors are transient and handled differently), consider documenting this distinction. Otherwise, embedding `APIError` would provide consistency:

♻️ Optional: Embed APIError for consistency

```diff
 // ThrottleError represents a throttled request (HTTP 429).
 type ThrottleError struct {
+	APIError
 	RetryAfter int
 }

 func (e ThrottleError) Error() string {
 	if e.RetryAfter == 0 {
 		return "uploadcare: request throttled"
 	}
 	return fmt.Sprintf(
 		"uploadcare: request throttled, retry after %d seconds",
 		e.RetryAfter,
 	)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ucare/error.go` around lines 40 - 53, ThrottleError currently does not embed APIError which prevents errors.As(err, &APIError{}) from matching it; update the ThrottleError definition to embed APIError (e.g., include APIError as an anonymous field in the struct) and keep the RetryAfter int field, then adjust construction/initialization sites so the embedded APIError fields (status/message/etc.) are populated; you can keep or adapt the Error() method to include RetryAfter but ensure the embedded APIError remains present so errors.As recognizes ThrottleError like AuthError/ValidationError/ForbiddenError.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ucare/restapi.go`:
- Around line 168-175: The retry logic in the function incorrectly applies the
MaxWaitSeconds ceiling only when the Retry-After header is present and above
zero, allowing the exponential backoff fallback to exceed MaxWaitSeconds. To fix
this in the retry code block, ensure that after computing wait using
expBackoff(tries) when retryAfter is zero or absent, you also apply the
MaxWaitSeconds ceiling to the wait time before proceeding. This change should be
made where the retry wait time is determined, specifically in the conditional
branches handling retryAfter and expBackoff.
In `@ucare/retry.go`:
- Around line 17-19: The expBackoff function currently does an unguarded shift
(1 << (attempt - 1)); add a defensive check to handle non-positive attempt
values by treating attempt <= 0 as 1 (or returning the minimum backoff) before
performing the shift so you never perform a negative shift count; update
expBackoff to normalize attempt when <= 0, then compute wait = 1 << (attempt -
1) and return min(wait, 30) (referencing the expBackoff function and the min
helper).
In `@ucare/uploadapi.go`:
- Around line 158-159: The json decoding call is passing the address of the
caller-provided destination wrapper (using &resdata) which causes the decoder to
overwrite the local interface{} pointer instead of populating the caller's
value; in the upload response handler replace
json.NewDecoder(resp.Body).Decode(&resdata) with
json.NewDecoder(resp.Body).Decode(resdata) so the decoder writes into the
caller's provided destination (resdata) rather than the local wrapper.
In `@upload/upload_test.go`:
- Around line 142-184: The test TestUpload_LargeFile_MultipartUpload declares
and increments completeHit in the HTTP handler for the "/multipart/complete/"
case but never asserts it; add an assertion after the upload to verify
completeHit.Load() equals 1 (use the existing assert package, e.g.
assert.Equal(t, int32(1), completeHit.Load())) so the test confirms the complete
endpoint was invoked.
---
Nitpick comments:
In `@ucare/error.go`:
- Around line 40-53: ThrottleError currently does not embed APIError which
prevents errors.As(err, &APIError{}) from matching it; update the ThrottleError
definition to embed APIError (e.g., include APIError as an anonymous field in
the struct) and keep the RetryAfter int field, then adjust
construction/initialization sites so the embedded APIError fields
(status/message/etc.) are populated; you can keep or adapt the Error() method to
include RetryAfter but ensure the embedded APIError remains present so errors.As
recognizes ThrottleError like AuthError/ValidationError/ForbiddenError.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d1228c9c-82c9-4a21-b611-1048c5bf7165
📒 Files selected for processing (14)
- CHANGELOG.md
- internal/config/config.go
- ucare/client.go
- ucare/error.go
- ucare/error_test.go
- ucare/restapi.go
- ucare/restapi_test.go
- ucare/retry.go
- ucare/retry_test.go
- ucare/uploadapi.go
- ucare/uploadapi_test.go
- upload/service.go
- upload/upload.go
- upload/upload_test.go
💤 Files with no reviewable changes (1)
- internal/config/config.go
```go
if c.retry.MaxWaitSeconds > 0 &&
	retryAfter > c.retry.MaxWaitSeconds {
	return false, ThrottleError{RetryAfter: retryAfter}
}
wait := retryAfter
if wait <= 0 {
	wait = expBackoff(tries)
}
```
Don't bypass MaxWaitSeconds on the fallback backoff path.
When Retry-After is absent or 0, this branch falls back to expBackoff(tries) after the only ceiling check has already run. That means a client configured with MaxWaitSeconds: 1 can still sleep for 2s, 4s, etc. on REST retries, unlike the upload client.
Suggested fix
```diff
-	if c.retry.MaxWaitSeconds > 0 &&
-		retryAfter > c.retry.MaxWaitSeconds {
-		return false, ThrottleError{RetryAfter: retryAfter}
-	}
 	wait := retryAfter
 	if wait <= 0 {
 		wait = expBackoff(tries)
 	}
+	if c.retry.MaxWaitSeconds > 0 && wait > c.retry.MaxWaitSeconds {
+		if retryAfter > 0 {
+			return false, ThrottleError{RetryAfter: retryAfter}
+		}
+		wait = c.retry.MaxWaitSeconds
+	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ucare/restapi.go` around lines 168 - 175, The retry logic in the function
incorrectly applies the MaxWaitSeconds ceiling only when the Retry-After header
is present and above zero, allowing the exponential backoff fallback to exceed
MaxWaitSeconds. To fix this in the retry code block, ensure that after computing
wait using expBackoff(tries) when retryAfter is zero or absent, you also apply
the MaxWaitSeconds ceiling to the wait time before proceeding. This change
should be made where the retry wait time is determined, specifically in the
conditional branches handling retryAfter and expBackoff.
@coderabbitai clarified the RetryConfig.MaxWaitSeconds docs to make the intended split explicit: for REST, it only triggers fail-fast when the server sends a positive Retry-After above the cap; when Retry-After is absent/invalid, REST still uses uncapped fallback exponential backoff. Upload behavior remains capped because that API does not return Retry-After.
@alxgsv, thanks for the clarification! The asymmetric behavior is intentional and makes sense — REST responses can carry a server-authoritative Retry-After that signals how long to wait, so exceeding MaxWaitSeconds on that path is a meaningful fail-fast signal. Without a Retry-After, the client-side exponential backoff is just a best-effort estimate and capping it isn't required. Upload, having no Retry-After, always caps the backoff via MaxWaitSeconds. The code is correct as designed; I'll withdraw my concern.
✏️ Learnings added
Learnt from: alxgsv
Repo: uploadcare/uploadcare-go PR: 27
File: ucare/restapi.go:168-177
Timestamp: 2026-03-14T08:20:30.304Z
Learning: In `ucare/restapi.go` and `ucare/uploadapi.go` (uploadcare-go, Go), `RetryConfig.MaxWaitSeconds` has intentionally asymmetric semantics between the REST and Upload API clients:
- **REST client**: `MaxWaitSeconds` only triggers fail-fast (returns `ThrottleError`) when the server sends a positive `Retry-After` header that exceeds the cap. When `Retry-After` is absent or invalid, the REST client falls back to uncapped exponential backoff (`expBackoff(tries)`) and `MaxWaitSeconds` does NOT apply.
- **Upload client**: `MaxWaitSeconds` always caps the backoff wait, because the upload API does not return a `Retry-After` header.
This split is intentional by design.
```go
func expBackoff(attempt int) int {
	wait := 1 << (attempt - 1)
	return min(wait, 30)
}
```
Consider guarding against non-positive attempt values.
If `attempt` is 0 or negative, `1 << (attempt - 1)` panics at runtime in Go (negative shift count). While this is an unexported function and callers likely pass 1-indexed attempts, adding a guard would prevent subtle bugs:
🛡️ Proposed defensive fix
```diff
 func expBackoff(attempt int) int {
+	if attempt < 1 {
+		return 1
+	}
 	wait := 1 << (attempt - 1)
 	return min(wait, 30)
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ucare/retry.go` around lines 17 - 19, The expBackoff function currently does
an unguarded shift (1 << (attempt - 1)); add a defensive check to handle
non-positive attempt values by treating attempt <= 0 as 1 (or returning the
minimum backoff) before performing the shift so you never perform a negative
shift count; update expBackoff to normalize attempt when <= 0, then compute wait
= 1 << (attempt - 1) and return min(wait, 30) (referencing the expBackoff
function and the min helper).
```go
if err := json.NewDecoder(resp.Body).Decode(&resdata); err != nil {
	return false, err
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Compare the decode target in the upload and REST clients.
rg -n 'Decode\(&resdata\)|Decode\(resdata\)' ucare/uploadapi.go ucare/restapi.go
```

Repository: uploadcare/uploadcare-go
Length of output: 248
Decode into resdata, not &resdata.
resdata already holds the caller's destination. Taking its address only lets the decoder overwrite the local interface{} wrapper, so successful upload responses can return nil while leaving the caller's result unchanged.
Suggested fix
```diff
-	if err := json.NewDecoder(resp.Body).Decode(&resdata); err != nil {
+	if err := json.NewDecoder(resp.Body).Decode(resdata); err != nil {
 		return false, err
 	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ucare/uploadapi.go` around lines 158 - 159, The json decoding call is passing
the address of the caller-provided destination wrapper (using &resdata) which
causes the decoder to overwrite the local interface{} pointer instead of
populating the caller's value; in the upload response handler replace
json.NewDecoder(resp.Body).Decode(&resdata) with
json.NewDecoder(resp.Body).Decode(resdata) so the decoder writes into the
caller's provided destination (resdata) rather than the local wrapper.
```go
func TestUpload_LargeFile_MultipartUpload(t *testing.T) {
	t.Parallel()

	var directHit, multipartHit, completeHit atomic.Int32
	fileID := "test-uuid-456"

	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		switch r.URL.Path {
		case "/base/":
			directHit.Add(1)
			json.NewEncoder(w).Encode(map[string]string{"file": fileID})
		case "/multipart/start/":
			multipartHit.Add(1)
			json.NewEncoder(w).Encode(map[string]interface{}{
				"uuid":  fileID,
				"parts": []string{},
			})
		case "/multipart/complete/":
			completeHit.Add(1)
			json.NewEncoder(w).Encode(FileInfo{FileName: "large.bin"})
		default:
			w.WriteHeader(http.StatusNotFound)
		}
	}))
	defer srv.Close()

	client := &mockUploadClient{baseURL: srv.URL, conn: srv.Client()}
	svc := NewService(client)

	data := bytes.NewReader(make([]byte, 20*1024*1024)) // 20MB
	info, err := svc.Upload(context.Background(), UploadParams{
		Data:        data,
		Name:        "large.bin",
		ContentType: "application/octet-stream",
		Size:        20 * 1024 * 1024,
	})

	assert.NoError(t, err)
	assert.Equal(t, "large.bin", info.FileName)
	assert.Equal(t, int32(0), directHit.Load())
	assert.Equal(t, int32(1), multipartHit.Load())
}
```
Missing assertion for completeHit counter.
The test declares completeHit (line 145) and increments it in the handler (line 161), but never asserts its value. For completeness, verify that /multipart/complete/ was called:
💚 Add assertion for completeHit
```diff
 	assert.Equal(t, int32(0), directHit.Load())
 	assert.Equal(t, int32(1), multipartHit.Load())
+	assert.Equal(t, int32(1), completeHit.Load())
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@upload/upload_test.go` around lines 142 - 184, The test
TestUpload_LargeFile_MultipartUpload declares and increments completeHit in the
HTTP handler for the "/multipart/complete/" case but never asserts it; add an
assertion after the upload to verify completeHit.Load() equals 1 (use the
existing assert package, e.g. assert.Equal(t, int32(1), completeHit.Load())) so
the test confirms the complete endpoint was invoked.
`ucare.Config.Retry`, `upload.Service.Upload()` with automatic direct-vs-multipart selection, and changelog updates

Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Tests