…updated, and file.infected
Previously, any status not explicitly handled (e.g. 409, 403, 500, 502)
fell through to the JSON decode path, silently producing zero-value
results. Now all >= 400 statuses return an error.
Also fix Decode(&resdata) -> Decode(resdata) to avoid unnecessary
*interface{} indirection.
Extract handleResponse method with defer resp.Body.Close() to guarantee the response body is closed on all paths including nil resdata, error decoding, and retry. Replace goto with for loop.
Derive CDN base URL from the public key (SHA256 → base36 → first 10 chars). Config.CDNBase can be set explicitly to override the default. Shallow-copy Config in resolveConfig to avoid mutating a shared template when creating clients for multiple projects.
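The derivation described above can be sketched as a standalone helper (illustrative only; the real implementation lives in ucare/cdn.go, and the surrounding domain format is not shown here):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

// cdnSubdomain sketches the derivation described above: SHA-256 the public
// key, render the digest in base36, and keep the first 10 characters.
func cdnSubdomain(publicKey string) string {
	sum := sha256.Sum256([]byte(publicKey))
	n := new(big.Int).SetBytes(sum[:]) // digest as a big integer
	return n.Text(36)[:10]             // base36 encoding, truncated to 10 chars
}

func main() {
	// "demopublickey" is a placeholder, not a real project key.
	sub := cdnSubdomain("demopublickey")
	fmt.Println(sub, len(sub))
}
```

Because the digest of a 256-bit hash is far longer than 10 base36 digits, the truncation is always safe, and the result is deterministic per public key — which is what makes an explicit `Config.CDNBase` override useful for custom domains.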
Walkthrough

This pull request introduces new addon and metadata services for file management, changes the file.Info() API to accept optional parameters (a breaking change), switches webhook deletion to use ID instead of URL, adds structured error types with retry configuration, extends the upload API with metadata support and automatic multipart selection, and provides CDN utilities for per-project domain derivation.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client Code
    participant AddonSvc as Addon Service
    participant API as REST API
    participant Queue as Job Queue
    Client->>AddonSvc: Execute(ctx, "clamav", params)
    activate AddonSvc
    AddonSvc->>API: POST /addons/clamav/execute/
    activate API
    API-->>AddonSvc: {requestID: "abc123"}
    deactivate API
    AddonSvc-->>Client: ExecuteResult{RequestID}
    deactivate AddonSvc
    Note over Client: Poll for status
    loop Until Done/Error
        Client->>AddonSvc: Status(ctx, "clamav", "abc123")
        activate AddonSvc
        AddonSvc->>API: GET /addons/clamav/execute/status/?RequestID=abc123
        activate API
        API-->>AddonSvc: {status: "in_progress|done|error", result: {...}}
        deactivate API
        AddonSvc-->>Client: StatusResult
        deactivate AddonSvc
        Client->>Client: Wait + Retry
    end
```

```mermaid
sequenceDiagram
    participant Client as Client Code
    participant UploadSvc as Upload Service
    participant Threshold as Threshold Check
    participant DirectAPI as Direct Upload API
    participant MultipartAPI as Multipart API
    Client->>UploadSvc: Upload(ctx, UploadParams{Size, Metadata, ...})
    activate UploadSvc
    UploadSvc->>Threshold: Size >= threshold?
    activate Threshold
    alt Size < Threshold
        Threshold-->>UploadSvc: Use Direct
        UploadSvc->>DirectAPI: POST /base/ with file + metadata[key]=value
        activate DirectAPI
        DirectAPI-->>UploadSvc: FileInfo
        deactivate DirectAPI
    else Size >= Threshold
        Threshold-->>UploadSvc: Use Multipart
        UploadSvc->>MultipartAPI: POST /multipart/start/ with metadata
        activate MultipartAPI
        MultipartAPI-->>UploadSvc: {upload_id}
        deactivate MultipartAPI
        UploadSvc->>MultipartAPI: Upload chunks
        UploadSvc->>MultipartAPI: POST /multipart/complete/
        MultipartAPI-->>UploadSvc: FileInfo
    end
    deactivate Threshold
    UploadSvc-->>Client: FileInfo
    deactivate UploadSvc
```

```mermaid
sequenceDiagram
    participant Client as Client Code
    participant MetadataSvc as Metadata Service
    participant API as REST API
    participant Validator as Key Validator
    Client->>MetadataSvc: Set(ctx, fileUUID, "key", "value")
    activate MetadataSvc
    MetadataSvc->>Validator: validateKey("key")
    activate Validator
    Validator-->>MetadataSvc: Valid
    deactivate Validator
    MetadataSvc->>API: PUT /files/uuid/metadata/key with JSON body
    activate API
    API-->>MetadataSvc: "value"
    deactivate API
    MetadataSvc-->>Client: string result
    deactivate MetadataSvc
    Client->>MetadataSvc: Get(ctx, fileUUID, "key")
    activate MetadataSvc
    MetadataSvc->>Validator: validateKey("key")
    Validator-->>MetadataSvc: Valid
    MetadataSvc->>API: GET /files/uuid/metadata/key
    activate API
    API-->>MetadataSvc: "value"
    deactivate API
    MetadataSvc-->>Client: "value"
    deactivate MetadataSvc
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
upload/service.go (1)
27-35: ⚠️ Potential issue | 🟠 Major

Adding `Upload` to the exported `Service` interface is a breaking change.

In Go, any downstream package that implements or mocks `upload.Service` will fail to compile: their implementations no longer satisfy the interface. To maintain compatibility, consider:

- Introducing a separate interface (e.g., `UploadService`) with the new method and embedding `Service` in it
- Using the optional-interface pattern: define an interface for the new capability and probe for it at runtime (e.g., `if uploader, ok := service.(interface{ Upload(...) }); ok { ... }`)
- Sealing the interface with an unexported method if you only expect your own implementations
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upload/service.go` around lines 27 - 35, You added Upload to the exported Service interface which is a breaking change; to fix, remove Upload from Service and either (a) create a new exported interface UploadService that declares Upload and have concrete implementations implement both Service and UploadService where needed, or (b) keep Service unchanged and implement an optional uploader interface (e.g., interface{ Upload(context.Context, UploadParams) (FileInfo, error) }) that callers probe at runtime with a type assertion; update call sites to use the new UploadService where compile-time guarantee is required or to perform the runtime check against the optional uploader interface.
🧹 Nitpick comments (6)
internal/codec/codec_test.go (1)
272-289: Mirror the nil/empty-map cases on the query path.

Form-data now pins nil and empty metadata behavior, but query encoding only tests a populated map. Adding the matching query cases would catch regressions that start emitting `metadata=` or stray `metadata[...]` keys.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/codec/codec_test.go` around lines 272 - 289, Extend TestEncodeReqQuery_WithMetadata to mirror the nil and empty-map form-data cases: add subtests (or additional test cases) that call codec.EncodeReqQuery with Metadata set to nil and to an empty map, then assert the request query contains no "metadata" key and no "metadata=" or "metadata[...]" entries; reference the existing TestEncodeReqQuery_WithMetadata and codec.EncodeReqQuery to locate where to add these checks and use req.URL.Query() to validate absence of metadata parameters.

ucare/cdn_test.go (1)

6-6: Same alias inconsistency as `webhook/webhook_test.go`.

Consider using `require` directly for consistency and clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ucare/cdn_test.go` at line 6, The test import currently aliases the require package as "assert"; change the import to use the canonical name "require" (i.e., import "github.com/stretchr/testify/require" as require) and update all usages in this file (e.g., any assert.* calls) to require.* to match the webhook/webhook_test.go style and keep aliasing consistent across tests.

webhook/webhook_test.go (1)

6-6: Consider clarifying the import alias.

Aliasing `require` as `assert` may confuse readers since these packages have different behaviors (`require` stops execution on failure, `assert` does not). Either use the default name or alias it as `require`.

♻️ Suggested change

```diff
-	assert "github.com/stretchr/testify/require"
+	"github.com/stretchr/testify/require"
```

Then update usages to `require.Equal(...)`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@webhook/webhook_test.go` at line 6, The import currently aliases the require package as assert (assert "github.com/stretchr/testify/require"); change the import to use the correct alias/name (require "github.com/stretchr/testify/require" or just github.com/stretchr/testify/require) and then update all test calls that use assert.* to require.* (e.g., require.Equal, require.NoError, etc.) in webhook_test.go so the semantics match the require package that aborts on failure.

test/metadata.go (1)

11-40: Consider using `assert.NoError` for error assertions.

Using `assert.NoError(t, err)` instead of `assert.Equal(t, nil, err)` is more idiomatic in Go tests and provides better error messages when assertions fail.

♻️ Suggested improvement

```diff
 func metadataSet(t *testing.T, r *testenv.Runner) {
 	ctx := context.Background()
 	val, err := r.Metadata.Set(ctx, r.Artifacts.Files[0].ID, "test_key", "test_value")
-	assert.Equal(t, nil, err)
+	assert.NoError(t, err)
 	assert.Equal(t, "test_value", val)
 }

 func metadataGet(t *testing.T, r *testenv.Runner) {
 	ctx := context.Background()
 	val, err := r.Metadata.Get(ctx, r.Artifacts.Files[0].ID, "test_key")
-	assert.Equal(t, nil, err)
+	assert.NoError(t, err)
 	assert.Equal(t, "test_value", val)
 }

 func metadataList(t *testing.T, r *testenv.Runner) {
 	ctx := context.Background()
 	data, err := r.Metadata.List(ctx, r.Artifacts.Files[0].ID)
-	assert.Equal(t, nil, err)
+	assert.NoError(t, err)
 	assert.Equal(t, "test_value", data["test_key"])
 }

 func metadataDelete(t *testing.T, r *testenv.Runner) {
 	ctx := context.Background()
 	err := r.Metadata.Delete(ctx, r.Artifacts.Files[0].ID, "test_key")
-	assert.Equal(t, nil, err)
+	assert.NoError(t, err)

 	// verify it's gone
 	_, err = r.Metadata.Get(ctx, r.Artifacts.Files[0].ID, "test_key")
 	assert.NotNil(t, err)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/metadata.go` around lines 11 - 40, Replace the nil-equality error checks with assert.NoError for all test helper functions: in metadataSet, metadataGet, metadataList, and metadataDelete, change the assertions that currently use assert.Equal(t, nil, err) to assert.NoError(t, err) when calling r.Metadata.Set, r.Metadata.Get, r.Metadata.List and r.Metadata.Delete; keep the existing value assertions (e.g., assert.Equal for "test_value") and the final assert.NotNil in metadataDelete that verifies the key was removed.

webhook/service_test.go (1)

18-65: Consider extracting shared test client to reduce duplication.

The `testClient` implementation is nearly identical to the one in `addon/service_test.go`. Consider extracting it to a shared internal test helper package (e.g., `internal/testutil`) if more service tests follow this pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@webhook/service_test.go` around lines 18 - 65, The testClient, its methods NewRequest and Do, plus newTestService and usage of NewService are duplicated across tests; extract them into a shared internal test helper package (e.g., internal/testutil) and import it in both webhook and addon tests: move the testClient type, NewRequest, Do, and newTestService helpers into that package, adjust their signatures if needed to avoid package-specific types, update tests to call testutil.NewTestService (or equivalent) and testutil.TestClient to reduce duplication and centralize shared test logic.

upload/upload_test.go (1)

156-161: The multipart tests still pass if no chunk is ever uploaded.

Each `/multipart/start/` fixture returns an empty `parts` list, so these cases only prove that the code hits start/complete. A regression that skips the actual part upload would still satisfy the current assertions. Return at least one fake part URL and assert that it is requested in the multipart paths.

Also applies to: 203-206, 288-291, 367-370

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upload/upload_test.go` around lines 156 - 161, The multipart test fixtures return an empty "parts" slice which lets tests pass without any chunk upload; modify the multipart/start handler(s) in upload_test.go (the case matching "/multipart/start/") to return at least one fake part URL (e.g., construct a part URL using the test server's base URL and a predictable path) instead of an empty []string, and then update the tests to assert that this fake part URL is requested by checking the recorded multipart request paths (the variable/structure collecting multipart requests such as multipartHit or the request path recorder) to ensure the part upload was actually invoked; apply the same change to the other multipart/start fixtures referenced in the comment so each returns a non-empty parts list and each test verifies the corresponding part upload path was hit.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@conversion/path.go`:
- Around line 26-29: Dynamic URL segments are being concatenated raw (e.g.
fmt.Fprintf(&b, "%s/document/-/format/%s/", opts.UUID, format) and the opts.Page
segment), which can break paths; import net/url and wrap dynamic parts with
url.PathEscape (e.g. url.PathEscape(opts.UUID), url.PathEscape(format)) before
writing them into the buffer, and apply the same escaping to all other dynamic
segments in the other block (lines handling opts.Page and the 58-73 section) so
reserved characters are percent-encoded.
In `@file/file.go`:
- Around line 27-35: The new third parameter on service.Info is a breaking
change; restore the original signature Info(ctx context.Context, fileID string)
(Info, error) and add a new method InfoWithParams(ctx context.Context, fileID
string, params *InfoParams) (Info, error) that contains the logic that accepts
params (or delegates to a shared internal helper), and update any public service
interface to include InfoWithParams while keeping Info unchanged so existing
callers, method expressions, and implementations keep compiling.
In `@internal/codec/codec.go`:
- Around line 129-132: In EncodeReqQuery, remove the panic triggered when
f.Interface() does not assert to map[string]string (the wrongMapType panic) and
instead return a descriptive error from EncodeReqQuery; locate the branch around
the m, ok := f.Interface().(map[string]string) check and on !ok return
fmt.Errorf or a wrapped error that includes the actual type (use
reflect.TypeOf(f.Interface())) and mention unsupported map type so the caller
can handle the failure gracefully.
In `@ucare/error.go`:
- Around line 33-67: Add Unwrap methods so callers can use errors.As to access
the embedded APIError: embed APIError into ThrottleError (change ThrottleError
to include APIError) and implement Unwrap on AuthError, ValidationError,
ForbiddenError and ThrottleError (use the same receiver style as their Error
methods) to return the embedded APIError as an error; this exposes
StatusCode/Detail for errors.As and preserves the existing Error() behavior.
In `@ucare/restapi.go`:
- Around line 135-155: The error-handling branches that decode JSON into
APIError/AuthError/ForbiddenError (the switch on resp.StatusCode handling
400/404, 401, 403) currently return the JSON decode error directly, which loses
the structured error when the body is empty or plain text; change each branch to
attempt json.Decode into the respective struct (APIError, AuthError,
ForbiddenError), and if decode fails, read resp.Body once (io.ReadAll), trim the
text, set that trimmed string into the struct's Message/Detail field (or
appropriate field), ensure StatusCode is set (resp.StatusCode or 401/403), and
return the populated struct; apply the same fallback behavior used in the
generic >=400 path (lines ~185-191) so callers always receive a structured error
object even when JSON decoding fails.
- Around line 195-218: The helper is incorrectly treating typed nils as "no
destination", causing responses to be discarded; update isNilResponseData to
return true only when resdata == nil (remove the reflect-based Kind/IsNil checks
and the reflect import) so that typed nil pointers/slices/maps still allow
decoding to occur (adjust any tests if needed); ensure the Do flow that calls
isNilResponseData (the caller around json.NewDecoder(resp.Body).Decode(resdata))
will attempt Decode when resdata is a typed nil.
- Around line 100-121: In restAPIClient.Do, the loop can attempt retries even
when the request has a Body but no GetBody, causing replay of a closed/consumed
body; modify the retry logic in Do (and its interaction with handleResponse) so
that if handleResponse requests a retry but req.GetBody is nil (and req.Body !=
nil), Do returns an error immediately instead of looping; also ensure the
existing reset code that calls req.GetBody to reassign req.Body only runs when
req.GetBody != nil and that the error returned explains the lack of GetBody
support for retrying the request.
In `@upload/upload.go`:
- Around line 58-69: The current uploadDirect calls s.File(...) then immediately
calls s.FileInfo(...) and if the lookup fails the upload is reported as an
error, which can cause duplicate uploads; change uploadDirect to preserve and
return the successful upload id when s.File succeeds instead of hiding it behind
the follow-up lookup: after s.File(...) returns id, call s.FileInfo(...) only as
an optional best-effort (e.g., if it succeeds return that FileInfo), but if
s.FileInfo fails return FileInfo with at least the ID set (e.g., FileInfo{ID:
id}) and nil error (or return the id alongside a wrapped error that includes the
id) so callers know the upload succeeded; update function uploadDirect and any
callers expecting full metadata accordingly (references: uploadDirect, s.File,
s.FileInfo, UploadParams, FileInfo).
- Around line 37-41: The Upload method currently lets params.Size < 0 fall
through to direct upload; validate and reject negative Size at the start of
Upload (before computing threshold) by returning an error (use the errors
package) when params.Size is negative. Update the Upload function to check
params.Size < 0 and return a clear error (e.g., "invalid size: must be
non-negative") so multipart selection logic that uses MultipartThreshold and
DefaultMultipartThreshold is not bypassed; reference the Upload method and
UploadParams.Size/MultipartThreshold symbols when adding the check.
---
Outside diff comments:
In `@upload/service.go`:
- Around line 27-35: You added Upload to the exported Service interface which is
a breaking change; to fix, remove Upload from Service and either (a) create a
new exported interface UploadService that declares Upload and have concrete
implementations implement both Service and UploadService where needed, or (b)
keep Service unchanged and implement an optional uploader interface (e.g.,
interface{ Upload(context.Context, UploadParams) (FileInfo, error) }) that
callers probe at runtime with a type assertion; update call sites to use the new
UploadService where compile-time guarantee is required or to perform the runtime
check against the optional uploader interface.
---
Nitpick comments:
In `@internal/codec/codec_test.go`:
- Around line 272-289: Extend TestEncodeReqQuery_WithMetadata to mirror the nil
and empty-map form-data cases: add subtests (or additional test cases) that call
codec.EncodeReqQuery with Metadata set to nil and to an empty map, then assert
the request query contains no "metadata" key and no "metadata=" or
"metadata[...]" entries; reference the existing TestEncodeReqQuery_WithMetadata
and codec.EncodeReqQuery to locate where to add these checks and use
req.URL.Query() to validate absence of metadata parameters.
In `@test/metadata.go`:
- Around line 11-40: Replace the nil-equality error checks with assert.NoError
for all test helper functions: in metadataSet, metadataGet, metadataList, and
metadataDelete, change the assertions that currently use assert.Equal(t, nil,
err) to assert.NoError(t, err) when calling r.Metadata.Set, r.Metadata.Get,
r.Metadata.List and r.Metadata.Delete; keep the existing value assertions (e.g.,
assert.Equal for "test_value") and the final assert.NotNil in metadataDelete
that verifies the key was removed.
In `@ucare/cdn_test.go`:
- Line 6: The test import currently aliases the require package as "assert";
change the import to use the canonical name "require" (i.e., import
"github.com/stretchr/testify/require" as require) and update all usages in this
file (e.g., any assert.* calls) to require.* to match the
webhook/webhook_test.go style and keep aliasing consistent across tests.
In `@upload/upload_test.go`:
- Around line 156-161: The multipart test fixtures return an empty "parts" slice
which lets tests pass without any chunk upload; modify the multipart/start
handler(s) in upload_test.go (the case matching "/multipart/start/") to return
at least one fake part URL (e.g., construct a part URL using the test server's
base URL and a predictable path) instead of an empty []string, and then update
the tests to assert that this fake part URL is requested by checking the
recorded multipart request paths (the variable/structure collecting multipart
requests such as multipartHit or the request path recorder) to ensure the part
upload was actually invoked; apply the same change to the other multipart/start
fixtures referenced in the comment so each returns a non-empty parts list and
each test verifies the corresponding part upload path was hit.
In `@webhook/service_test.go`:
- Around line 18-65: The testClient, its methods NewRequest and Do, plus
newTestService and usage of NewService are duplicated across tests; extract them
into a shared internal test helper package (e.g., internal/testutil) and import
it in both webhook and addon tests: move the testClient type, NewRequest, Do,
and newTestService helpers into that package, adjust their signatures if needed
to avoid package-specific types, update tests to call testutil.NewTestService
(or equivalent) and testutil.TestClient to reduce duplication and centralize
shared test logic.
In `@webhook/webhook_test.go`:
- Line 6: The import currently aliases the require package as assert (assert
"github.com/stretchr/testify/require"); change the import to use the correct
alias/name (require "github.com/stretchr/testify/require" or just
github.com/stretchr/testify/require) and then update all test calls that use
assert.* to require.* (e.g., require.Equal, require.NoError, etc.) in
webhook_test.go so the semantics match the require package that aborts on
failure.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a969f19f-6dcc-43e5-8e44-4335a806e5f5
📒 Files selected for processing (54)
- CHANGELOG.md
- README.md
- addon/addon.go
- addon/log.go
- addon/service.go
- addon/service_test.go
- conversion/conversion.go
- conversion/conversion_test.go
- conversion/path.go
- conversion/path_test.go
- file/file.go
- file/list.go
- file/service.go
- file/service_test.go
- group/group.go
- group/group_test.go
- group/service.go
- internal/codec/codec.go
- internal/codec/codec_test.go
- internal/config/config.go
- metadata/log.go
- metadata/metadata.go
- metadata/service.go
- metadata/service_test.go
- metadata/validation.go
- test/addon.go
- test/file.go
- test/group.go
- test/integration_test.go
- test/metadata.go
- test/testenv/runner.go
- test/webhook.go
- ucare/cdn.go
- ucare/cdn_test.go
- ucare/client.go
- ucare/doc.go
- ucare/error.go
- ucare/error_test.go
- ucare/restapi.go
- ucare/restapi_test.go
- ucare/retry.go
- ucare/retry_test.go
- ucare/uploadapi.go
- ucare/uploadapi_test.go
- upload/direct.go
- upload/fromurl.go
- upload/multipart.go
- upload/service.go
- upload/upload.go
- upload/upload_test.go
- webhook/service.go
- webhook/service_test.go
- webhook/webhook.go
- webhook/webhook_test.go
💤 Files with no reviewable changes (1)
- internal/config/config.go
```go
fmt.Fprintf(&b, "%s/document/-/format/%s/", opts.UUID, format)

if opts.Page > 0 {
	fmt.Fprintf(&b, "-/page/%d/", opts.Page)
```
Escape dynamic URL path segments before concatenation.
These builders currently inject raw values into URL paths. Reserved characters in any option can break the path or change semantics.
Proposed fix
```diff
 import (
 	"fmt"
+	"net/url"
 	"strings"
 )
+
+func escPathSegment(v string) string { return url.PathEscape(v) }
@@
-	fmt.Fprintf(&b, "%s/document/-/format/%s/", opts.UUID, format)
+	fmt.Fprintf(&b, "%s/document/-/format/%s/", escPathSegment(opts.UUID), escPathSegment(format))
@@
-	fmt.Fprintf(&b, "%s/video/-/format/%s/", opts.UUID, opts.Format)
+	fmt.Fprintf(&b, "%s/video/-/format/%s/", escPathSegment(opts.UUID), escPathSegment(opts.Format))
@@
-	fmt.Fprintf(&b, "-/size/%s/", opts.Size)
+	fmt.Fprintf(&b, "-/size/%s/", escPathSegment(opts.Size))
 	if opts.ResizeMode != "" {
-		fmt.Fprintf(&b, "%s/", opts.ResizeMode)
+		fmt.Fprintf(&b, "%s/", escPathSegment(opts.ResizeMode))
 	}
@@
-	fmt.Fprintf(&b, "-/quality/%s/", opts.Quality)
+	fmt.Fprintf(&b, "-/quality/%s/", escPathSegment(opts.Quality))
@@
-	fmt.Fprintf(&b, "-/cut/%s/%s/", opts.CutStart, opts.CutLength)
+	fmt.Fprintf(&b, "-/cut/%s/%s/", escPathSegment(opts.CutStart), escPathSegment(opts.CutLength))
```

Also applies to: 58-73
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@conversion/path.go` around lines 26 - 29, Dynamic URL segments are being
concatenated raw (e.g. fmt.Fprintf(&b, "%s/document/-/format/%s/", opts.UUID,
format) and the opts.Page segment), which can break paths; import net/url and
wrap dynamic parts with url.PathEscape (e.g. url.PathEscape(opts.UUID),
url.PathEscape(format)) before writing them into the buffer, and apply the same
escaping to all other dynamic segments in the other block (lines handling
opts.Page and the 58-73 section) so reserved characters are percent-encoded.
```go
func (s service) Info(
	ctx context.Context,
	fileID string,
	params *InfoParams,
) (data Info, err error) {
	var reqParams ucare.ReqEncoder
	if params != nil {
		reqParams = params
	}
```
🧩 Analysis chain
🌐 Web query:
In Go, is changing an exported method from `Info(ctx, fileID)` to `Info(ctx, fileID, params *InfoParams)` source-compatible? Is a variadic parameter such as `params ...*InfoParams` a common way to evolve the API without breaking existing call sites?
💡 Result:
No.
- Changing `Info(ctx, fileID)` to `Info(ctx, fileID, params *InfoParams)` is not source-compatible: all existing call sites now have "not enough arguments" errors and won't compile (it's a signature change, which is inherently breaking). (go.dev)
- Changing it to `Info(ctx, fileID, params ...*InfoParams)` is call-compatible with existing call sites (variadic functions "may be invoked with zero or more arguments" for the `...` parameter). (go.dev)
However, the variadic trick is still not fully source-/backward-compatible in Go, because it changes the function/method type. That can break code that doesn’t call it directly, e.g. assigning the method/function to a variable of the old type, using method expressions, or satisfying an interface with the old method signature. The Go team explicitly warns that “call compatibility is not enough for backward compatibility” and that there is “no backward-compatible change you can make to a function’s signature.” (go.dev)
Common evolution patterns instead:
- Add a new method (often with a suffix) and keep the old one (like `Query` + `QueryContext`). (go.dev)
- If you anticipate future knobs, accept an options/config struct (possibly nil) from the start, so new fields can be added compatibly. (go.dev)
Citations:
- 1: https://go.dev/blog/module-compatibility
- 2: https://go.dev/ref/spec
- 3: https://go.dev/blog/module-compatibility
- 4: https://go.dev/blog/module-compatibility
- 5: https://go.dev/blog/module-compatibility
This parameter addition is a compile-time breaking change.
Existing Info(ctx, fileID) callers won't compile until updated with a third argument. While a variadic params ...*InfoParams restores call compatibility for direct callers, it changes the method type signature, which still breaks code that assigns the method to a variable of the old type, uses method expressions, or satisfies an interface with the original signature.
Go's guidance on API evolution recommends either: (1) add a new method (e.g., InfoWithParams) and keep the old one, or (2) if you anticipated future options, design the method to accept an options/config struct from the start (possibly nil). This avoids signature changes altogether.
If using a variadic parameter
```diff
-func (s service) Info(
-	ctx context.Context,
-	fileID string,
-	params *InfoParams,
-) (data Info, err error) {
+func (s service) Info(
+	ctx context.Context,
+	fileID string,
+	params ...*InfoParams,
+) (data Info, err error) {
 	var reqParams ucare.ReqEncoder
-	if params != nil {
-		reqParams = params
+	if len(params) > 0 && params[0] != nil {
+		reqParams = params[0]
 	}
```

Apply the same signature change to any public service interface this method implements.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (s service) Info(
	ctx context.Context,
	fileID string,
	params ...*InfoParams,
) (data Info, err error) {
	var reqParams ucare.ReqEncoder
	if len(params) > 0 && params[0] != nil {
		reqParams = params[0]
	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@file/file.go` around lines 27 - 35, The new third parameter on service.Info
is a breaking change; restore the original signature Info(ctx context.Context,
fileID string) (Info, error) and add a new method InfoWithParams(ctx
context.Context, fileID string, params *InfoParams) (Info, error) that contains
the logic that accepts params (or delegates to a shared internal helper), and
update any public service interface to include InfoWithParams while keeping Info
unchanged so existing callers, method expressions, and implementations keep
compiling.
```go
m, ok := f.Interface().(map[string]string)
if !ok {
	panic(wrongMapType)
}
```
Avoid panic in query encoding for unsupported map types.
This introduces a hard crash path in SDK request building. Since EncodeReqQuery returns error, this should fail gracefully instead of panicking.
Proposed fix
```diff
-	m, ok := f.Interface().(map[string]string)
-	if !ok {
-		panic(wrongMapType)
-	}
+	m, ok := f.Interface().(map[string]string)
+	if !ok {
+		return TypeErr{
+			Data: tf.Name,
+			Type: "map[string]string",
+		}
+	}
```
Verify each finding against the current code and only fix it if needed.
In `@internal/codec/codec.go` around lines 129 - 132, In EncodeReqQuery, remove
the panic triggered when f.Interface() does not assert to map[string]string (the
wrongMapType panic) and instead return a descriptive error from EncodeReqQuery;
locate the branch around the m, ok := f.Interface().(map[string]string) check
and on !ok return fmt.Errorf or a wrapped error that includes the actual type
(use reflect.TypeOf(f.Interface())) and mention unsupported map type so the
caller can handle the failure gracefully.
```diff
+// AuthError represents an authentication failure (HTTP 401).
+type AuthError struct{ APIError }
+
-type throttleErr struct {
+func (e AuthError) Error() string {
+	return fmt.Sprintf("uploadcare: authentication failed: %s", e.Detail)
+}
+
+// ThrottleError represents a throttled request (HTTP 429).
+type ThrottleError struct {
 	RetryAfter int
 }

-func (e throttleErr) Error() string {
+func (e ThrottleError) Error() string {
 	if e.RetryAfter == 0 {
-		return "Request was throttled."
+		return "uploadcare: request throttled"
 	}
 	return fmt.Sprintf(
-		"Request was throttled. Expected available in %d second",
+		"uploadcare: request throttled, retry after %d seconds",
 		e.RetryAfter,
 	)
 }

-type reqValidationErr struct{ respErr }
+// ValidationError represents a request validation failure (HTTP 400).
+type ValidationError struct{ APIError }

-func (e reqValidationErr) Error() string {
-	return fmt.Sprintf("Request parameters validation error: %s", e.Details)
+func (e ValidationError) Error() string {
+	return fmt.Sprintf("uploadcare: validation error: %s", e.Detail)
 }

-type reqForbiddenErr struct{ respErr }
+// ForbiddenError represents a forbidden request (HTTP 403).
+type ForbiddenError struct{ APIError }
+
+func (e ForbiddenError) Error() string {
+	return fmt.Sprintf("uploadcare: forbidden: %s", e.Detail)
+}
```
🧩 Analysis chain
🌐 Web query:
In Go, does `errors.As` match an embedded error field without an `Unwrap` method? For a wrapper like `type AuthError struct{ APIError }`, what is the idiomatic way to preserve `errors.As(err, &apiErr)` behavior?
💡 Result:
No. errors.As only walks the error tree formed by Unwrap() error / Unwrap() []error (and any custom As(any) bool matcher). It does not look inside arbitrary fields (embedded or not) unless they’re reachable via Unwrap (or matched via a custom As). [1]
So for:
type AuthError struct{ APIError } // APIError has Error() but no Unwrap()

`errors.As(err, &apiErr)` will not find the embedded APIError just because it's embedded.
Idiomatic way to preserve errors.As(err, &apiErr) is to make AuthError a real wrapper by providing Unwrap (or by constructing it with %w):
type AuthError struct{ APIError }
func (e AuthError) Unwrap() error { return e.APIError } // or &e.APIError if you store a pointer

This is the standard pattern for wrapper types that want errors.Is/As to see the underlying error. [2][1]
Alternative (less common): implement As(any) bool on AuthError to “pretend” it’s an *APIError, but Unwrap is the usual/idiomatic approach. [1]
Sources:
[1] Go errors.As docs (error “tree”, Unwrap, custom As) — pkg.go.dev/errors
[2] Go 1.13 error wrapping article (wrapper types should add Unwrap) — go.dev/blog/go1.13-errors
Add Unwrap methods to enable error chain matching.
Embedding APIError alone does not make it accessible to errors.As. In Go, error chains are built only via Unwrap() methods (or custom As implementations). Without Unwrap, callers cannot use errors.As(err, &apiErr) to match through AuthError, ValidationError, or ForbiddenError. Additionally, ThrottleError should embed APIError to preserve StatusCode and Detail.
Add `Unwrap` methods to all error types
type AuthError struct{ APIError }
+func (e AuthError) Unwrap() error { return e.APIError }
type ThrottleError struct {
+ APIError
RetryAfter int
}
+func (e ThrottleError) Unwrap() error { return e.APIError }
type ValidationError struct{ APIError }
+func (e ValidationError) Unwrap() error { return e.APIError }
type ForbiddenError struct{ APIError }
+func (e ForbiddenError) Unwrap() error { return e.APIError }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // AuthError represents an authentication failure (HTTP 401). | |
| type AuthError struct{ APIError } | |
| type throttleErr struct { | |
| func (e AuthError) Error() string { | |
| return fmt.Sprintf("uploadcare: authentication failed: %s", e.Detail) | |
| } | |
| // ThrottleError represents a throttled request (HTTP 429). | |
| type ThrottleError struct { | |
| RetryAfter int | |
| } | |
| func (e throttleErr) Error() string { | |
| func (e ThrottleError) Error() string { | |
| if e.RetryAfter == 0 { | |
| return "Request was throttled." | |
| return "uploadcare: request throttled" | |
| } | |
| return fmt.Sprintf( | |
| "Request was throttled. Expected available in %d second", | |
| "uploadcare: request throttled, retry after %d seconds", | |
| e.RetryAfter, | |
| ) | |
| } | |
| type reqValidationErr struct{ respErr } | |
| // ValidationError represents a request validation failure (HTTP 400). | |
| type ValidationError struct{ APIError } | |
| func (e reqValidationErr) Error() string { | |
| return fmt.Sprintf("Request parameters validation error: %s", e.Details) | |
| func (e ValidationError) Error() string { | |
| return fmt.Sprintf("uploadcare: validation error: %s", e.Detail) | |
| } | |
| type reqForbiddenErr struct{ respErr } | |
| // ForbiddenError represents a forbidden request (HTTP 403). | |
| type ForbiddenError struct{ APIError } | |
| func (e ForbiddenError) Error() string { | |
| return fmt.Sprintf("uploadcare: forbidden: %s", e.Detail) | |
| } | |
| // AuthError represents an authentication failure (HTTP 401). | |
| type AuthError struct{ APIError } | |
| func (e AuthError) Unwrap() error { return e.APIError } | |
| func (e AuthError) Error() string { | |
| return fmt.Sprintf("uploadcare: authentication failed: %s", e.Detail) | |
| } | |
| // ThrottleError represents a throttled request (HTTP 429). | |
| type ThrottleError struct { | |
| APIError | |
| RetryAfter int | |
| } | |
| func (e ThrottleError) Unwrap() error { return e.APIError } | |
| func (e ThrottleError) Error() string { | |
| if e.RetryAfter == 0 { | |
| return "uploadcare: request throttled" | |
| } | |
| return fmt.Sprintf( | |
| "uploadcare: request throttled, retry after %d seconds", | |
| e.RetryAfter, | |
| ) | |
| } | |
| // ValidationError represents a request validation failure (HTTP 400). | |
| type ValidationError struct{ APIError } | |
| func (e ValidationError) Unwrap() error { return e.APIError } | |
| func (e ValidationError) Error() string { | |
| return fmt.Sprintf("uploadcare: validation error: %s", e.Detail) | |
| } | |
| // ForbiddenError represents a forbidden request (HTTP 403). | |
| type ForbiddenError struct{ APIError } | |
| func (e ForbiddenError) Unwrap() error { return e.APIError } | |
| func (e ForbiddenError) Error() string { | |
| return fmt.Sprintf("uploadcare: forbidden: %s", e.Detail) | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ucare/error.go` around lines 33 - 67, Add Unwrap methods so callers can use
errors.As to access the embedded APIError: embed APIError into ThrottleError
(change ThrottleError to include APIError) and implement Unwrap on AuthError,
ValidationError, ForbiddenError and ThrottleError (use the same receiver style
as their Error methods) to return the embedded APIError as an error; this
exposes StatusCode/Detail for errors.As and preserves the existing Error()
behavior.
| func (c *restAPIClient) Do(req *http.Request, resdata interface{}) error { | ||
| tries := 0 | ||
| for tries := 1; ; tries++ { | ||
| if tries > 1 && req.GetBody != nil { | ||
| var err error | ||
| req.Body, err = req.GetBody() | ||
| if err != nil { | ||
| return err | ||
| } | ||
| } | ||
|
|
||
| try: | ||
| tries++ | ||
| log.Debugf("making %d request: %+v", tries, req) | ||
|
|
||
| if tries > 1 && req.GetBody != nil { | ||
| var err error | ||
| req.Body, err = req.GetBody() | ||
| resp, err := c.conn.Do(req) | ||
| if err != nil { | ||
| return err | ||
| } | ||
| } | ||
|
|
||
| log.Debugf("making %d request: %+v", tries, req) | ||
|
|
||
| resp, err := c.conn.Do(req) | ||
| if err != nil { | ||
| return err | ||
| } | ||
| if req.Body != nil { | ||
| defer req.Body.Close() | ||
| retry, err := c.handleResponse(resp, req, resdata, tries) | ||
| if err != nil || !retry { | ||
| return err | ||
| } | ||
| } |
🧩 Analysis chain
🌐 Web query:
In Go's net/http package, what does the documentation say about reusing an http.Request with a non-nil Body after Client.Do, and when is Request.GetBody required for retries or redirects?
💡 Result:
- Reusing an `http.Request` with a non-nil `Body` after `Client.Do`: the `net/http` docs state that the request `Body`, if non-nil, will be closed by the underlying `Transport` (even on errors), and it may be closed asynchronously after `Do` returns, so you should not expect to be able to reuse the same `Request.Body` after `Do` without replacing/resetting it. [1]
- When `Request.GetBody` is required (retries / redirects): `Request.GetBody` is an optional function that returns a fresh copy of `Body`, and it is used for client requests when a redirect requires reading the body more than once (you still must set `Body` too). [2]
  - In `Client.Do`, 307/308 redirects preserve the original method and body only if `Request.GetBody` is defined (whereas 301/302/303 switch to GET/HEAD with no body). [1]
  - For retries you implement yourself, `GetBody` is effectively required any time the request body must be sent again (because the original `Body` will have been consumed/closed); the standard library explicitly documents this "read more than once" requirement in the redirect context. [2]
Sources:
[1] Go net/http docs — Client.Do (redirect behavior; request body closed, possibly asynchronously) (pkg.go.dev)
[2] Go net/http docs — Request.GetBody field description (used when body must be read more than once; still requires setting Body) (pkg.go.dev)
Don't retry request bodies without GetBody replay support.
Client.Do is public and can receive custom requests with a body but no GetBody function defined. Go's net/http.Transport closes the request Body after each Do call, so retrying the same request on a 429 without resetting the body will attempt to send an already-closed/consumed body. Check that GetBody is available before retrying; otherwise, fail immediately.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ucare/restapi.go` around lines 100 - 121, In restAPIClient.Do, the loop can
attempt retries even when the request has a Body but no GetBody, causing replay
of a closed/consumed body; modify the retry logic in Do (and its interaction
with handleResponse) so that if handleResponse requests a retry but req.GetBody
is nil (and req.Body != nil), Do returns an error immediately instead of
looping; also ensure the existing reset code that calls req.GetBody to reassign
req.Body only runs when req.GetBody != nil and that the error returned explains
the lack of GetBody support for retrying the request.
| case 400, 404: | ||
| var err respErr | ||
| if e := json.NewDecoder(resp.Body).Decode(&err); e != nil { | ||
| return e | ||
| var apiErr APIError | ||
| if e := json.NewDecoder(resp.Body).Decode(&apiErr); e != nil { | ||
| return false, e | ||
| } | ||
| resp.Body.Close() | ||
| return err | ||
| apiErr.StatusCode = resp.StatusCode | ||
| return false, apiErr | ||
| case 401: | ||
| var err authErr | ||
| if e := json.NewDecoder(resp.Body).Decode(&err); e != nil { | ||
| return e | ||
| var authErr AuthError | ||
| if e := json.NewDecoder(resp.Body).Decode(&authErr); e != nil { | ||
| return false, e | ||
| } | ||
| authErr.StatusCode = 401 | ||
| return false, authErr | ||
| case 403: | ||
| var forbiddenErr ForbiddenError | ||
| if e := json.NewDecoder(resp.Body).Decode(&forbiddenErr); e != nil { | ||
| return false, e | ||
| } | ||
| resp.Body.Close() | ||
| return err | ||
| forbiddenErr.StatusCode = 403 | ||
| return false, forbiddenErr |
Keep structured HTTP errors even when the body is empty or plain text.
These branches return the JSON decode error directly for 400/401/403/404, and the generic >=400 path falls back to StatusText instead of the raw body. That means callers can see EOF or lose the server's actual message rather than getting APIError/AuthError/ForbiddenError with a populated StatusCode. A read-once fallback to trimmed body text would keep the new error contract intact.
Also applies to: 185-191
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ucare/restapi.go` around lines 135 - 155, The error-handling branches that
decode JSON into APIError/AuthError/ForbiddenError (the switch on
resp.StatusCode handling 400/404, 401, 403) currently return the JSON decode
error directly, which loses the structured error when the body is empty or plain
text; change each branch to attempt json.Decode into the respective struct
(APIError, AuthError, ForbiddenError), and if decode fails, read resp.Body once
(io.ReadAll), trim the text, set that trimmed string into the struct's
Message/Detail field (or appropriate field), ensure StatusCode is set
(resp.StatusCode or 401/403), and return the populated struct; apply the same
fallback behavior used in the generic >=400 path (lines ~185-191) so callers
always receive a structured error object even when JSON decoding fails.
| if isNilResponseData(resdata) { | ||
| return false, nil | ||
| } | ||
|
|
||
| err = json.NewDecoder(resp.Body).Decode(&resdata) | ||
| if err != nil { | ||
| return err | ||
| if err := json.NewDecoder(resp.Body).Decode(resdata); err != nil { | ||
| return false, err | ||
| } | ||
| resp.Body.Close() | ||
|
|
||
| return nil | ||
| return false, nil | ||
| } | ||
|
|
||
| func isNilResponseData(resdata interface{}) bool { | ||
| if resdata == nil { | ||
| return true | ||
| } | ||
|
|
||
| v := reflect.ValueOf(resdata) | ||
| switch v.Kind() { | ||
| case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, | ||
| reflect.Ptr, reflect.Slice: | ||
| return v.IsNil() | ||
| default: | ||
| return false | ||
| } |
Only literal nil should mean “skip decoding”.
isNilResponseData currently treats typed nil maps/slices/pointers as “no destination”, so client.Do(req, out) can return nil on a 2xx response while dropping the payload entirely. That silently masks the same caller error the upload client now exposes.
🧩 Proposed fix
func isNilResponseData(resdata interface{}) bool {
- if resdata == nil {
- return true
- }
-
- v := reflect.ValueOf(resdata)
- switch v.Kind() {
- case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map,
- reflect.Ptr, reflect.Slice:
- return v.IsNil()
- default:
- return false
- }
+ return resdata == nil
}

// remove this import once the helper no longer uses reflection
import "reflect"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ucare/restapi.go` around lines 195 - 218, The helper is incorrectly treating
typed nils as "no destination", causing responses to be discarded; update
isNilResponseData to return true only when resdata == nil (remove the
reflect-based Kind/IsNil checks and the reflect import) so that typed nil
pointers/slices/maps still allow decoding to occur (adjust any tests if needed);
ensure the Do flow that calls isNilResponseData (the caller around
json.NewDecoder(resp.Body).Decode(resdata)) will attempt Decode when resdata is
a typed nil.
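The typed-nil pitfall can be demonstrated side by side; `isNilReflect` mirrors the current check and `isNilStrict` is the proposed literal-nil-only version (both names are illustrative).

```go
package main

import (
	"fmt"
	"reflect"
)

// isNilReflect mirrors the current reflect-based check, which treats a
// typed nil pointer/map/slice as "no destination".
func isNilReflect(resdata interface{}) bool {
	if resdata == nil {
		return true
	}
	v := reflect.ValueOf(resdata)
	switch v.Kind() {
	case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map,
		reflect.Ptr, reflect.Slice:
		return v.IsNil()
	default:
		return false
	}
}

// isNilStrict is the proposed fix: only a literal nil skips decoding.
func isNilStrict(resdata interface{}) bool { return resdata == nil }

func main() {
	var out *struct{ Name string } // typed nil pointer, a caller bug

	// The reflect version silently drops the payload; the strict version
	// attempts Decode, which then surfaces the caller error.
	fmt.Println(isNilReflect(out)) // true: payload dropped
	fmt.Println(isNilStrict(out))  // false: decode is attempted
	fmt.Println(isNilStrict(nil))  // true: only literal nil skips
}
```

With the strict check, `json.Decode` into the typed nil pointer fails loudly, matching the behavior the upload client now exposes.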
| func (s service) Upload(ctx context.Context, params UploadParams) (FileInfo, error) { | ||
| threshold := DefaultMultipartThreshold | ||
| if params.MultipartThreshold != nil { | ||
| threshold = *params.MultipartThreshold | ||
| } |
Reject negative Size values before threshold selection.
Size < 0 currently falls through to the direct-upload branch because params.Size > threshold is false. That silently disables multipart selection for invalid input and can route a large payload to the wrong endpoint.
🛡️ Proposed fix
func (s service) Upload(ctx context.Context, params UploadParams) (FileInfo, error) {
+ if params.Size < 0 {
+ return FileInfo{}, errors.New("upload: size must be non-negative")
+ }
+
	threshold := DefaultMultipartThreshold

// add alongside the existing imports
import "errors"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@upload/upload.go` around lines 37 - 41, The Upload method currently lets
params.Size < 0 fall through to direct upload; validate and reject negative Size
at the start of Upload (before computing threshold) by returning an error (use
the errors package) when params.Size is negative. Update the Upload function to
check params.Size < 0 and return a clear error (e.g., "invalid size: must be
non-negative") so multipart selection logic that uses MultipartThreshold and
DefaultMultipartThreshold is not bypassed; reference the Upload method and
UploadParams.Size/MultipartThreshold symbols when adding the check.
| func (s service) uploadDirect(ctx context.Context, params UploadParams) (FileInfo, error) { | ||
| id, err := s.File(ctx, FileParams{ | ||
| Data: params.Data, | ||
| Name: params.Name, | ||
| ContentType: params.ContentType, | ||
| ToStore: params.ToStore, | ||
| Metadata: params.Metadata, | ||
| }) | ||
| if err != nil { | ||
| return FileInfo{}, err | ||
| } | ||
| return s.FileInfo(ctx, id) |
Don't hide a successful direct upload behind the follow-up lookup.
After s.File() returns, the file already exists. If the immediate s.FileInfo() call then fails, Upload surfaces only that read error and callers are likely to retry the whole operation, duplicating the upload. Please preserve the uploaded file ID on this path or avoid the extra lookup here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@upload/upload.go` around lines 58 - 69, The current uploadDirect calls
s.File(...) then immediately calls s.FileInfo(...) and if the lookup fails the
upload is reported as an error, which can cause duplicate uploads; change
uploadDirect to preserve and return the successful upload id when s.File
succeeds instead of hiding it behind the follow-up lookup: after s.File(...)
returns id, call s.FileInfo(...) only as an optional best-effort (e.g., if it
succeeds return that FileInfo), but if s.FileInfo fails return FileInfo with at
least the ID set (e.g., FileInfo{ID: id}) and nil error (or return the id
alongside a wrapped error that includes the id) so callers know the upload
succeeded; update function uploadDirect and any callers expecting full metadata
accordingly (references: uploadDirect, s.File, s.FileInfo, UploadParams,
FileInfo).
- `CDNBase` field to `ucare.Config` for explicit override; auto-derived when empty
- Shallow-copy `Config` in `resolveConfig` to prevent stale CDN base when reusing a config template across multiple projects

New public API:
- `ucare.DefaultCDNDomain`: "ucarecd.net"
- `ucare.CnamePrefix(pubkey string) string`
- `ucare.CDNBaseURL(pubkey string) string`
- `ucare.Config.CDNBase string`

Summary by CodeRabbit
Release Notes
New Features
Breaking Changes
Improvements