
Phase 5: automatic per-project CDN base URL#29

Open
alxgsv wants to merge 14 commits into master from phase5-automatic-cdn-base

Conversation

@alxgsv
Contributor

@alxgsv alxgsv commented Mar 16, 2026

  • Compute per-project CDN base URL from the public key (SHA256 → base36 → first 10 chars), matching pyuploadcare and file-uploader implementations
  • Add CDNBase field to ucare.Config for explicit override; auto-derived when empty
  • Shallow-copy Config in resolveConfig to prevent stale CDN base when reusing a config template across multiple projects

New public API:

  • ucare.DefaultCDNDomain = "ucarecd.net"
  • ucare.CnamePrefix(pubkey string) string
  • ucare.CDNBaseURL(pubkey string) string
  • ucare.Config.CDNBase string
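The derivation in the first bullet (SHA256 → base36 → first 10 chars) can be sketched in Go. This is an illustrative reimplementation with hypothetical helper names, not the SDK's actual code; the real ucare.CnamePrefix may differ in encoding details.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

// cnamePrefix sketches the derivation: SHA256 the public key, interpret
// the digest as a big integer, render it in base36, and keep the first
// 10 characters. A 256-bit digest yields ~50 base36 digits, so the
// truncation is always safe.
func cnamePrefix(pubkey string) string {
	sum := sha256.Sum256([]byte(pubkey))
	n := new(big.Int).SetBytes(sum[:])
	return n.Text(36)[:10]
}

// cdnBaseURL joins the prefix with the default CDN domain.
func cdnBaseURL(pubkey string) string {
	return fmt.Sprintf("https://%s.ucarecd.net", cnamePrefix(pubkey))
}

func main() {
	fmt.Println(cdnBaseURL("demopublickey"))
}
```

The prefix is deterministic per public key, so every client created for the same project derives the same CDN base.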

Summary by CodeRabbit

Release Notes

New Features

  • Added addon support: ClamAV and RemoveBG integrations for file processing
  • File metadata management: set, get, list, and delete custom key-value metadata
  • File group deletion capability
  • Conversion enhancements: save multi-page results as groups; new document and video path builders
  • Improved CDN URL generation from public keys

Breaking Changes

  • File info retrieval now requires an additional parameter
  • Webhook deletion now uses ID instead of URL
  • Throttled requests no longer retry by default; enable via configuration

Improvements

  • Configurable automatic retries with exponential backoff for throttled requests
  • Enhanced error types for better error handling
  • Metadata support in file uploads (direct and multipart)

alxgsv added 14 commits March 13, 2026 16:19
Previously, any status not explicitly handled (e.g. 409, 403, 500, 502)
fell through to the JSON decode path, silently producing zero-value
results. Now all >= 400 statuses return an error.

Also fix Decode(&resdata) -> Decode(resdata) to avoid unnecessary
*interface{} indirection.
Extract handleResponse method with defer resp.Body.Close() to
guarantee the response body is closed on all paths including nil
resdata, error decoding, and retry. Replace goto with for loop.
Derive CDN base URL from the public key (SHA256 → base36 → first 10 chars).
Config.CDNBase can be set explicitly to override the default. Shallow-copy Config in resolveConfig to avoid mutating a shared template when creating clients for multiple projects.
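The shallow-copy fix in the commit above can be sketched like this; the Config fields and the derivation placeholder are illustrative, not the SDK's exact code.

```go
package main

import "fmt"

// Config mirrors the shape relevant to the commit above (illustrative).
type Config struct {
	PublicKey string
	CDNBase   string
}

// resolveConfig takes the template by value, so filling in a derived
// CDNBase never mutates the caller's shared copy.
func resolveConfig(tmpl Config, pubkey string) Config {
	cfg := tmpl // value copy; shallow is enough for flat fields
	cfg.PublicKey = pubkey
	if cfg.CDNBase == "" {
		cfg.CDNBase = "https://" + pubkey + ".ucarecd.net" // placeholder for the real derivation
	}
	return cfg
}

func main() {
	tmpl := Config{}
	a := resolveConfig(tmpl, "project-a")
	b := resolveConfig(tmpl, "project-b")
	// The template stays empty, and each project gets its own base.
	fmt.Println(a.CDNBase, b.CDNBase, tmpl.CDNBase == "")
}
```

Without the copy, the first client's derived CDNBase would be written back into the shared template and leak into the second project's client.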
@coderabbitai

coderabbitai bot commented Mar 16, 2026

Walkthrough

This pull request introduces new addon and metadata services for file management, breaks file.Info() API to accept optional parameters, changes webhook deletion to use ID instead of URL, adds structured error types with retry configuration, extends upload API with metadata support and automatic multipart selection, and provides CDN utilities for per-project domain derivation.

Changes

Cohort / File(s) Summary
Addon Service
addon/addon.go, addon/log.go, addon/service.go, addon/service_test.go, test/addon.go
New addon execution and status polling service supporting RemoveBG, ClamAV, Rekognition addons with request/response types, logging control, and integration tests.
Metadata Service
metadata/metadata.go, metadata/log.go, metadata/service.go, metadata/service_test.go, metadata/validation.go, test/metadata.go
New metadata key-value service with List, Get, Set, Delete operations, key validation, dot-segment escaping, logging control, and comprehensive test coverage.
File Service API Changes
file/file.go, file/service.go, file/list.go, file/service_test.go, README.md, ucare/doc.go, test/file.go
Breaking change: Info() method now accepts *InfoParams for optional include query parameter; ListParams now supports Include field; updated documentation and test examples.
Conversion Enhancements
conversion/conversion.go, conversion/conversion_test.go, conversion/path.go, conversion/path_test.go
Added SaveInGroup field to Params for multi-page results; new DocumentPathOptions and VideoPathOptions builders with conditional segment composition.
Upload API Metadata
upload/direct.go, upload/fromurl.go, upload/multipart.go, upload/service.go, upload/upload.go, upload/upload_test.go
Extended FileParams, FromURLParams, MultipartParams with Metadata field; new Upload service method with automatic multipart threshold selection and metadata pass-through to both direct and multipart paths.
Webhook Service
webhook/webhook.go, webhook/service.go, webhook/webhook_test.go, webhook/service_test.go, test/webhook.go
Breaking change: Delete() now accepts int64 ID instead of target URL; added EventFileStored, EventFileDeleted, EventFileInfoUpdated constants; deprecated EventFileInfected.
Group Service
group/group.go, group/service.go, group/group_test.go, test/group.go
New Delete method to remove group metadata via DELETE endpoint.
Error Handling & Retry
ucare/error.go, ucare/error_test.go, ucare/retry.go, ucare/retry_test.go, ucare/restapi.go, ucare/restapi_test.go, ucare/uploadapi.go, ucare/uploadapi_test.go
New structured error types (APIError, AuthError, ThrottleError, ValidationError, ForbiddenError) with Detail fields; RetryConfig for max retries and wait seconds; enhanced response handling with exponential backoff and Retry-After support; no retry by default on 429.
CDN Utilities
ucare/cdn.go, ucare/cdn_test.go, ucare/client.go
New CnamePrefix and CDNBaseURL functions for per-project CDN domain derivation from public key; Config extended with Retry and CDNBase fields; auto-derived CDNBase when not specified.
Internal Codec
internal/codec/codec.go, internal/codec/codec_test.go
Added bracket notation support for map[string]string form encoding (metadata[key]=value); new tests for metadata field encoding in queries and form data.
Configuration
internal/config/config.go
Removed MaxThrottleRetries constant (retry behavior now opt-in via Config.Retry).
Documentation & Integration
CHANGELOG.md, test/integration_test.go, test/testenv/runner.go
Updated CHANGELOG with breaking changes and features; extended integration test suite with new addon, metadata, group operations; extended Runner with Metadata and Addon services; added AddonRequestID to Artifacts.
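The bracket notation mentioned in the Internal Codec row above (metadata[key]=value) can be sketched as a small form encoder. This is illustrative, not the codec package's real code.

```go
package main

import (
	"fmt"
	"net/url"
)

// encodeMetadata turns each map entry into a metadata[key]=value form
// field; a nil or empty map contributes nothing to the request.
func encodeMetadata(m map[string]string) url.Values {
	v := url.Values{}
	for k, val := range m {
		v.Set(fmt.Sprintf("metadata[%s]", k), val)
	}
	return v
}

func main() {
	v := encodeMetadata(map[string]string{"env": "prod"})
	fmt.Println(v.Encode()) // brackets are percent-encoded on the wire
}
```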

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client Code
    participant AddonSvc as Addon Service
    participant API as REST API
    participant Queue as Job Queue
    
    Client->>AddonSvc: Execute(ctx, "clamav", params)
    activate AddonSvc
    AddonSvc->>API: POST /addons/clamav/execute/
    activate API
    API-->>AddonSvc: {requestID: "abc123"}
    deactivate API
    AddonSvc-->>Client: ExecuteResult{RequestID}
    deactivate AddonSvc
    
    Note over Client: Poll for status
    loop Until Done/Error
        Client->>AddonSvc: Status(ctx, "clamav", "abc123")
        activate AddonSvc
        AddonSvc->>API: GET /addons/clamav/execute/status/?RequestID=abc123
        activate API
        API-->>AddonSvc: {status: "in_progress|done|error", result: {...}}
        deactivate API
        AddonSvc-->>Client: StatusResult
        deactivate AddonSvc
        Client->>Client: Wait + Retry
    end
sequenceDiagram
    participant Client as Client Code
    participant UploadSvc as Upload Service
    participant Threshold as Threshold Check
    participant DirectAPI as Direct Upload API
    participant MultipartAPI as Multipart API
    
    Client->>UploadSvc: Upload(ctx, UploadParams{Size, Metadata, ...})
    activate UploadSvc
    UploadSvc->>Threshold: Size >= threshold?
    activate Threshold
    alt Size < Threshold
        Threshold-->>UploadSvc: Use Direct
        UploadSvc->>DirectAPI: POST /base/ with file + metadata[key]=value
        activate DirectAPI
        DirectAPI-->>UploadSvc: FileInfo
        deactivate DirectAPI
    else Size >= Threshold
        Threshold-->>UploadSvc: Use Multipart
        UploadSvc->>MultipartAPI: POST /multipart/start/ with metadata
        activate MultipartAPI
        MultipartAPI-->>UploadSvc: {upload_id}
        deactivate MultipartAPI
        UploadSvc->>MultipartAPI: Upload chunks
        UploadSvc->>MultipartAPI: POST /multipart/complete/
        MultipartAPI-->>UploadSvc: FileInfo
    end
    deactivate Threshold
    UploadSvc-->>Client: FileInfo
    deactivate UploadSvc
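The threshold branch in the diagram above reduces to a simple size check. The 10 MiB default here is an assumption for illustration; the SDK's actual default may differ.

```go
package main

import "fmt"

// defaultMultipartThreshold is an assumed value for illustration.
const defaultMultipartThreshold = 10 * 1024 * 1024 // 10 MiB

// chooseUploadPath picks direct upload below the threshold and
// multipart at or above it, falling back to the default when the
// caller did not configure a threshold.
func chooseUploadPath(size, threshold int64) string {
	if threshold <= 0 {
		threshold = defaultMultipartThreshold
	}
	if size >= threshold {
		return "multipart"
	}
	return "direct"
}

func main() {
	fmt.Println(chooseUploadPath(1<<20, 0))  // 1 MiB file
	fmt.Println(chooseUploadPath(64<<20, 0)) // 64 MiB file
}
```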
sequenceDiagram
    participant Client as Client Code
    participant MetadataSvc as Metadata Service
    participant API as REST API
    participant Validator as Key Validator
    
    Client->>MetadataSvc: Set(ctx, fileUUID, "key", "value")
    activate MetadataSvc
    MetadataSvc->>Validator: validateKey("key")
    activate Validator
    Validator-->>MetadataSvc: Valid
    deactivate Validator
    MetadataSvc->>API: PUT /files/uuid/metadata/key with JSON body
    activate API
    API-->>MetadataSvc: "value"
    deactivate API
    MetadataSvc-->>Client: string result
    deactivate MetadataSvc
    
    Client->>MetadataSvc: Get(ctx, fileUUID, "key")
    activate MetadataSvc
    MetadataSvc->>Validator: validateKey("key")
    Validator-->>MetadataSvc: Valid
    MetadataSvc->>API: GET /files/uuid/metadata/key
    activate API
    API-->>MetadataSvc: "value"
    deactivate API
    MetadataSvc-->>Client: "value"
    deactivate MetadataSvc
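The validateKey step in the diagram above can be sketched with a simple allow-list check. The charset and 64-character limit are assumptions about typical metadata key rules, not the SDK's authoritative constraints.

```go
package main

import (
	"fmt"
	"regexp"
)

// keyRe accepts ASCII letters, digits, and a few separators, up to 64
// characters (assumed limits for illustration).
var keyRe = regexp.MustCompile(`^[A-Za-z0-9_.:-]{1,64}$`)

// validateKey rejects keys before any request is made, so malformed
// names never reach the metadata endpoint.
func validateKey(k string) error {
	if !keyRe.MatchString(k) {
		return fmt.Errorf("invalid metadata key: %q", k)
	}
	return nil
}

func main() {
	fmt.Println(validateKey("test_key"), validateKey("bad key"))
}
```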

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 12.39% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'Phase 5: automatic per-project CDN base URL' directly and specifically describes the main change in the PR: automatic derivation of per-project CDN base URLs and related configuration additions.





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
upload/service.go (1)

27-35: ⚠️ Potential issue | 🟠 Major

Adding Upload to the exported Service interface is a breaking change.

In Go, any downstream package that implements or mocks upload.Service will fail to compile—their implementations no longer satisfy the interface. To maintain compatibility, consider:

  • Introduce a separate interface (e.g., UploadService) with the new method and embed Service in it
  • Use optional interface pattern: define an interface for the new capability and probe for it at runtime (e.g., if uploader, ok := service.(interface{Upload(...)}); ok { ... })
  • Seal the interface with an unexported method if you only expect your own implementations
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@upload/service.go` around lines 27 - 35, You added Upload to the exported
Service interface which is a breaking change; to fix, remove Upload from Service
and either (a) create a new exported interface UploadService that declares
Upload and have concrete implementations implement both Service and
UploadService where needed, or (b) keep Service unchanged and implement an
optional uploader interface (e.g., interface{ Upload(context.Context,
UploadParams) (FileInfo, error) }) that callers probe at runtime with a type
assertion; update call sites to use the new UploadService where compile-time
guarantee is required or to perform the runtime check against the optional
uploader interface.
🧹 Nitpick comments (6)
internal/codec/codec_test.go (1)

272-289: Mirror the nil/empty-map cases on the query path.

Form-data now pins nil and empty metadata behavior, but query encoding only tests a populated map. Adding the matching query cases would catch regressions that start emitting metadata= or stray metadata[...] keys.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/codec/codec_test.go` around lines 272 - 289, Extend
TestEncodeReqQuery_WithMetadata to mirror the nil and empty-map form-data cases:
add subtests (or additional test cases) that call codec.EncodeReqQuery with
Metadata set to nil and to an empty map, then assert the request query contains
no "metadata" key and no "metadata=" or "metadata[...]" entries; reference the
existing TestEncodeReqQuery_WithMetadata and codec.EncodeReqQuery to locate
where to add these checks and use req.URL.Query() to validate absence of
metadata parameters.
ucare/cdn_test.go (1)

6-6: Same alias inconsistency as webhook/webhook_test.go.

Consider using require directly for consistency and clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ucare/cdn_test.go` at line 6, The test import currently aliases the require
package as "assert"; change the import to use the canonical name "require"
(i.e., import "github.com/stretchr/testify/require" as require) and update all
usages in this file (e.g., any assert.* calls) to require.* to match the
webhook/webhook_test.go style and keep aliasing consistent across tests.
webhook/webhook_test.go (1)

6-6: Consider clarifying the import alias.

Aliasing require as assert may confuse readers since these packages have different behaviors (require stops execution on failure, assert does not). Either use the default name or alias it as require.

♻️ Suggested change
-	assert "github.com/stretchr/testify/require"
+	"github.com/stretchr/testify/require"

Then update usages to require.Equal(...).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@webhook/webhook_test.go` at line 6, The import currently aliases the require
package as assert (assert "github.com/stretchr/testify/require"); change the
import to use the correct alias/name (require
"github.com/stretchr/testify/require" or just
github.com/stretchr/testify/require) and then update all test calls that use
assert.* to require.* (e.g., require.Equal, require.NoError, etc.) in
webhook_test.go so the semantics match the require package that aborts on
failure.
test/metadata.go (1)

11-40: Consider using assert.NoError for error assertions.

Using assert.NoError(t, err) instead of assert.Equal(t, nil, err) is more idiomatic in Go tests and provides better error messages when assertions fail.

♻️ Suggested improvement
 func metadataSet(t *testing.T, r *testenv.Runner) {
 	ctx := context.Background()
 	val, err := r.Metadata.Set(ctx, r.Artifacts.Files[0].ID, "test_key", "test_value")
-	assert.Equal(t, nil, err)
+	assert.NoError(t, err)
 	assert.Equal(t, "test_value", val)
 }

 func metadataGet(t *testing.T, r *testenv.Runner) {
 	ctx := context.Background()
 	val, err := r.Metadata.Get(ctx, r.Artifacts.Files[0].ID, "test_key")
-	assert.Equal(t, nil, err)
+	assert.NoError(t, err)
 	assert.Equal(t, "test_value", val)
 }

 func metadataList(t *testing.T, r *testenv.Runner) {
 	ctx := context.Background()
 	data, err := r.Metadata.List(ctx, r.Artifacts.Files[0].ID)
-	assert.Equal(t, nil, err)
+	assert.NoError(t, err)
 	assert.Equal(t, "test_value", data["test_key"])
 }

 func metadataDelete(t *testing.T, r *testenv.Runner) {
 	ctx := context.Background()
 	err := r.Metadata.Delete(ctx, r.Artifacts.Files[0].ID, "test_key")
-	assert.Equal(t, nil, err)
+	assert.NoError(t, err)

 	// verify it's gone
 	_, err = r.Metadata.Get(ctx, r.Artifacts.Files[0].ID, "test_key")
 	assert.NotNil(t, err)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/metadata.go` around lines 11 - 40, Replace the nil-equality error checks
with assert.NoError for all test helper functions: in metadataSet, metadataGet,
metadataList, and metadataDelete, change the assertions that currently use
assert.Equal(t, nil, err) to assert.NoError(t, err) when calling r.Metadata.Set,
r.Metadata.Get, r.Metadata.List and r.Metadata.Delete; keep the existing value
assertions (e.g., assert.Equal for "test_value") and the final assert.NotNil in
metadataDelete that verifies the key was removed.
webhook/service_test.go (1)

18-65: Consider extracting shared test client to reduce duplication.

The testClient implementation is nearly identical to the one in addon/service_test.go. Consider extracting it to a shared internal test helper package (e.g., internal/testutil) if more service tests follow this pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@webhook/service_test.go` around lines 18 - 65, The testClient, its methods
NewRequest and Do, plus newTestService and usage of NewService are duplicated
across tests; extract them into a shared internal test helper package (e.g.,
internal/testutil) and import it in both webhook and addon tests: move the
testClient type, NewRequest, Do, and newTestService helpers into that package,
adjust their signatures if needed to avoid package-specific types, update tests
to call testutil.NewTestService (or equivalent) and testutil.TestClient to
reduce duplication and centralize shared test logic.
upload/upload_test.go (1)

156-161: The multipart tests still pass if no chunk is ever uploaded.

Each /multipart/start/ fixture returns an empty parts list, so these cases only prove that the code hits start/complete. A regression that skips the actual part upload would still satisfy the current assertions. Return at least one fake part URL and assert that it is requested in the multipart paths.

Also applies to: 203-206, 288-291, 367-370

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@upload/upload_test.go` around lines 156 - 161, The multipart test fixtures
return an empty "parts" slice which lets tests pass without any chunk upload;
modify the multipart/start handler(s) in upload_test.go (the case matching
"/multipart/start/") to return at least one fake part URL (e.g., construct a
part URL using the test server's base URL and a predictable path) instead of an
empty []string, and then update the tests to assert that this fake part URL is
requested by checking the recorded multipart request paths (the
variable/structure collecting multipart requests such as multipartHit or the
request path recorder) to ensure the part upload was actually invoked; apply the
same change to the other multipart/start fixtures referenced in the comment so
each returns a non-empty parts list and each test verifies the corresponding
part upload path was hit.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@conversion/path.go`:
- Around line 26-29: Dynamic URL segments are being concatenated raw (e.g.
fmt.Fprintf(&b, "%s/document/-/format/%s/", opts.UUID, format) and the opts.Page
segment), which can break paths; import net/url and wrap dynamic parts with
url.PathEscape (e.g. url.PathEscape(opts.UUID), url.PathEscape(format)) before
writing them into the buffer, and apply the same escaping to all other dynamic
segments in the other block (lines handling opts.Page and the 58-73 section) so
reserved characters are percent-encoded.

In `@file/file.go`:
- Around line 27-35: The new third parameter on service.Info is a breaking
change; restore the original signature Info(ctx context.Context, fileID string)
(Info, error) and add a new method InfoWithParams(ctx context.Context, fileID
string, params *InfoParams) (Info, error) that contains the logic that accepts
params (or delegates to a shared internal helper), and update any public service
interface to include InfoWithParams while keeping Info unchanged so existing
callers, method expressions, and implementations keep compiling.

In `@internal/codec/codec.go`:
- Around line 129-132: In EncodeReqQuery, remove the panic triggered when
f.Interface() does not assert to map[string]string (the wrongMapType panic) and
instead return a descriptive error from EncodeReqQuery; locate the branch around
the m, ok := f.Interface().(map[string]string) check and on !ok return
fmt.Errorf or a wrapped error that includes the actual type (use
reflect.TypeOf(f.Interface())) and mention unsupported map type so the caller
can handle the failure gracefully.

In `@ucare/error.go`:
- Around line 33-67: Add Unwrap methods so callers can use errors.As to access
the embedded APIError: embed APIError into ThrottleError (change ThrottleError
to include APIError) and implement Unwrap on AuthError, ValidationError,
ForbiddenError and ThrottleError (use the same receiver style as their Error
methods) to return the embedded APIError as an error; this exposes
StatusCode/Detail for errors.As and preserves the existing Error() behavior.

In `@ucare/restapi.go`:
- Around line 135-155: The error-handling branches that decode JSON into
APIError/AuthError/ForbiddenError (the switch on resp.StatusCode handling
400/404, 401, 403) currently return the JSON decode error directly, which loses
the structured error when the body is empty or plain text; change each branch to
attempt json.Decode into the respective struct (APIError, AuthError,
ForbiddenError), and if decode fails, read resp.Body once (io.ReadAll), trim the
text, set that trimmed string into the struct's Message/Detail field (or
appropriate field), ensure StatusCode is set (resp.StatusCode or 401/403), and
return the populated struct; apply the same fallback behavior used in the
generic >=400 path (lines ~185-191) so callers always receive a structured error
object even when JSON decoding fails.
- Around line 195-218: The helper is incorrectly treating typed nils as "no
destination", causing responses to be discarded; update isNilResponseData to
return true only when resdata == nil (remove the reflect-based Kind/IsNil checks
and the reflect import) so that typed nil pointers/slices/maps still allow
decoding to occur (adjust any tests if needed); ensure the Do flow that calls
isNilResponseData (the caller around json.NewDecoder(resp.Body).Decode(resdata))
will attempt Decode when resdata is a typed nil.
- Around line 100-121: In restAPIClient.Do, the loop can attempt retries even
when the request has a Body but no GetBody, causing replay of a closed/consumed
body; modify the retry logic in Do (and its interaction with handleResponse) so
that if handleResponse requests a retry but req.GetBody is nil (and req.Body !=
nil), Do returns an error immediately instead of looping; also ensure the
existing reset code that calls req.GetBody to reassign req.Body only runs when
req.GetBody != nil and that the error returned explains the lack of GetBody
support for retrying the request.
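The JSON-with-text-fallback decoding suggested for the 400/401/403 branches above can be sketched like this; the struct and field names are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

type APIError struct {
	StatusCode int
	Detail     string `json:"detail"`
}

// decodeAPIError tries JSON first; if the body is empty or plain text,
// it surfaces the trimmed raw text, so the caller always receives a
// populated structured error.
func decodeAPIError(statusCode int, body io.Reader) APIError {
	raw, _ := io.ReadAll(body)
	var e APIError
	if err := json.Unmarshal(raw, &e); err != nil || e.Detail == "" {
		e.Detail = strings.TrimSpace(string(raw))
	}
	e.StatusCode = statusCode
	return e
}

func main() {
	fmt.Println(decodeAPIError(400, strings.NewReader(`{"detail":"bad request"}`)))
	fmt.Println(decodeAPIError(502, strings.NewReader("Bad Gateway\n"))) // plain-text body from a proxy
}
```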
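The typed-nil fix described above comes down to one Go subtlety: an interface holding a typed nil pointer is itself non-nil. A minimal demonstration:

```go
package main

import "fmt"

// Only an untyped nil interface means "no destination". A typed nil
// pointer stored in an interface is non-nil, so the response body
// would still reach json.Decode (which reports its own error).
func isNilResponseData(resdata interface{}) bool {
	return resdata == nil
}

func main() {
	var p *struct{} // typed nil pointer
	fmt.Println(isNilResponseData(nil)) // true: untyped nil
	fmt.Println(isNilResponseData(p))   // false: interface carries a type
}
```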
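The GetBody guard suggested above can be sketched as a reset helper; the function name is hypothetical, but the net/http fields are real.

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// resetForRetry refuses to retry when the request body cannot be
// replayed: a non-nil Body with a nil GetBody has already been
// consumed and would be replayed empty or closed.
func resetForRetry(req *http.Request) error {
	if req.Body == nil {
		return nil // nothing to replay
	}
	if req.GetBody == nil {
		return errors.New("cannot retry: request body is not replayable (no GetBody)")
	}
	body, err := req.GetBody()
	if err != nil {
		return err
	}
	req.Body = body // fresh reader for the next attempt
	return nil
}

func main() {
	// http.NewRequest sets GetBody automatically for *strings.Reader,
	// *bytes.Reader, and *bytes.Buffer bodies.
	replayable, _ := http.NewRequest("POST", "https://example.com", strings.NewReader("payload"))
	fmt.Println(resetForRetry(replayable))

	// A hand-built request with a one-shot body cannot be retried.
	bad := &http.Request{Body: io.NopCloser(strings.NewReader("payload"))}
	fmt.Println(resetForRetry(bad))
}
```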

In `@upload/upload.go`:
- Around line 58-69: The current uploadDirect calls s.File(...) then immediately
calls s.FileInfo(...) and if the lookup fails the upload is reported as an
error, which can cause duplicate uploads; change uploadDirect to preserve and
return the successful upload id when s.File succeeds instead of hiding it behind
the follow-up lookup: after s.File(...) returns id, call s.FileInfo(...) only as
an optional best-effort (e.g., if it succeeds return that FileInfo), but if
s.FileInfo fails return FileInfo with at least the ID set (e.g., FileInfo{ID:
id}) and nil error (or return the id alongside a wrapped error that includes the
id) so callers know the upload succeeded; update function uploadDirect and any
callers expecting full metadata accordingly (references: uploadDirect, s.File,
s.FileInfo, UploadParams, FileInfo).
- Around line 37-41: The Upload method currently lets params.Size < 0 fall
through to direct upload; validate and reject negative Size at the start of
Upload (before computing threshold) by returning an error (use the errors
package) when params.Size is negative. Update the Upload function to check
params.Size < 0 and return a clear error (e.g., "invalid size: must be
non-negative") so multipart selection logic that uses MultipartThreshold and
DefaultMultipartThreshold is not bypassed; reference the Upload method and
UploadParams.Size/MultipartThreshold symbols when adding the check.

---

Outside diff comments:
In `@upload/service.go`:
- Around line 27-35: You added Upload to the exported Service interface which is
a breaking change; to fix, remove Upload from Service and either (a) create a
new exported interface UploadService that declares Upload and have concrete
implementations implement both Service and UploadService where needed, or (b)
keep Service unchanged and implement an optional uploader interface (e.g.,
interface{ Upload(context.Context, UploadParams) (FileInfo, error) }) that
callers probe at runtime with a type assertion; update call sites to use the new
UploadService where compile-time guarantee is required or to perform the runtime
check against the optional uploader interface.

---

Nitpick comments:
In `@internal/codec/codec_test.go`:
- Around line 272-289: Extend TestEncodeReqQuery_WithMetadata to mirror the nil
and empty-map form-data cases: add subtests (or additional test cases) that call
codec.EncodeReqQuery with Metadata set to nil and to an empty map, then assert
the request query contains no "metadata" key and no "metadata=" or
"metadata[...]" entries; reference the existing TestEncodeReqQuery_WithMetadata
and codec.EncodeReqQuery to locate where to add these checks and use
req.URL.Query() to validate absence of metadata parameters.

In `@test/metadata.go`:
- Around line 11-40: Replace the nil-equality error checks with assert.NoError
for all test helper functions: in metadataSet, metadataGet, metadataList, and
metadataDelete, change the assertions that currently use assert.Equal(t, nil,
err) to assert.NoError(t, err) when calling r.Metadata.Set, r.Metadata.Get,
r.Metadata.List and r.Metadata.Delete; keep the existing value assertions (e.g.,
assert.Equal for "test_value") and the final assert.NotNil in metadataDelete
that verifies the key was removed.

In `@ucare/cdn_test.go`:
- Line 6: The test import currently aliases the require package as "assert";
change the import to use the canonical name "require" (i.e., import
"github.com/stretchr/testify/require" as require) and update all usages in this
file (e.g., any assert.* calls) to require.* to match the
webhook/webhook_test.go style and keep aliasing consistent across tests.

In `@upload/upload_test.go`:
- Around line 156-161: The multipart test fixtures return an empty "parts" slice
which lets tests pass without any chunk upload; modify the multipart/start
handler(s) in upload_test.go (the case matching "/multipart/start/") to return
at least one fake part URL (e.g., construct a part URL using the test server's
base URL and a predictable path) instead of an empty []string, and then update
the tests to assert that this fake part URL is requested by checking the
recorded multipart request paths (the variable/structure collecting multipart
requests such as multipartHit or the request path recorder) to ensure the part
upload was actually invoked; apply the same change to the other multipart/start
fixtures referenced in the comment so each returns a non-empty parts list and
each test verifies the corresponding part upload path was hit.

In `@webhook/service_test.go`:
- Around line 18-65: The testClient, its methods NewRequest and Do, plus
newTestService and usage of NewService are duplicated across tests; extract them
into a shared internal test helper package (e.g., internal/testutil) and import
it in both webhook and addon tests: move the testClient type, NewRequest, Do,
and newTestService helpers into that package, adjust their signatures if needed
to avoid package-specific types, update tests to call testutil.NewTestService
(or equivalent) and testutil.TestClient to reduce duplication and centralize
shared test logic.

In `@webhook/webhook_test.go`:
- Line 6: The import currently aliases the require package as assert (assert
"github.com/stretchr/testify/require"); change the import to use the correct
alias/name (require "github.com/stretchr/testify/require" or just
github.com/stretchr/testify/require) and then update all test calls that use
assert.* to require.* (e.g., require.Equal, require.NoError, etc.) in
webhook_test.go so the semantics match the require package that aborts on
failure.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a969f19f-6dcc-43e5-8e44-4335a806e5f5

📥 Commits

Reviewing files that changed from the base of the PR and between 766afd6 and 87a4952.

📒 Files selected for processing (54)
  • CHANGELOG.md
  • README.md
  • addon/addon.go
  • addon/log.go
  • addon/service.go
  • addon/service_test.go
  • conversion/conversion.go
  • conversion/conversion_test.go
  • conversion/path.go
  • conversion/path_test.go
  • file/file.go
  • file/list.go
  • file/service.go
  • file/service_test.go
  • group/group.go
  • group/group_test.go
  • group/service.go
  • internal/codec/codec.go
  • internal/codec/codec_test.go
  • internal/config/config.go
  • metadata/log.go
  • metadata/metadata.go
  • metadata/service.go
  • metadata/service_test.go
  • metadata/validation.go
  • test/addon.go
  • test/file.go
  • test/group.go
  • test/integration_test.go
  • test/metadata.go
  • test/testenv/runner.go
  • test/webhook.go
  • ucare/cdn.go
  • ucare/cdn_test.go
  • ucare/client.go
  • ucare/doc.go
  • ucare/error.go
  • ucare/error_test.go
  • ucare/restapi.go
  • ucare/restapi_test.go
  • ucare/retry.go
  • ucare/retry_test.go
  • ucare/uploadapi.go
  • ucare/uploadapi_test.go
  • upload/direct.go
  • upload/fromurl.go
  • upload/multipart.go
  • upload/service.go
  • upload/upload.go
  • upload/upload_test.go
  • webhook/service.go
  • webhook/service_test.go
  • webhook/webhook.go
  • webhook/webhook_test.go
💤 Files with no reviewable changes (1)
  • internal/config/config.go

Comment on lines +26 to +29
fmt.Fprintf(&b, "%s/document/-/format/%s/", opts.UUID, format)

if opts.Page > 0 {
fmt.Fprintf(&b, "-/page/%d/", opts.Page)


⚠️ Potential issue | 🟠 Major

Escape dynamic URL path segments before concatenation.

These builders currently inject raw values into URL paths. Reserved characters in any option can break the path or change semantics.

Proposed fix
 import (
 	"fmt"
+	"net/url"
 	"strings"
 )
+
+func escPathSegment(v string) string { return url.PathEscape(v) }
@@
-	fmt.Fprintf(&b, "%s/document/-/format/%s/", opts.UUID, format)
+	fmt.Fprintf(&b, "%s/document/-/format/%s/", escPathSegment(opts.UUID), escPathSegment(format))
@@
-	fmt.Fprintf(&b, "%s/video/-/format/%s/", opts.UUID, opts.Format)
+	fmt.Fprintf(&b, "%s/video/-/format/%s/", escPathSegment(opts.UUID), escPathSegment(opts.Format))
@@
-		fmt.Fprintf(&b, "-/size/%s/", opts.Size)
+		fmt.Fprintf(&b, "-/size/%s/", escPathSegment(opts.Size))
 		if opts.ResizeMode != "" {
-			fmt.Fprintf(&b, "%s/", opts.ResizeMode)
+			fmt.Fprintf(&b, "%s/", escPathSegment(opts.ResizeMode))
 		}
@@
-		fmt.Fprintf(&b, "-/quality/%s/", opts.Quality)
+		fmt.Fprintf(&b, "-/quality/%s/", escPathSegment(opts.Quality))
@@
-		fmt.Fprintf(&b, "-/cut/%s/%s/", opts.CutStart, opts.CutLength)
+		fmt.Fprintf(&b, "-/cut/%s/%s/", escPathSegment(opts.CutStart), escPathSegment(opts.CutLength))

Also applies to: 58-73

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@conversion/path.go` around lines 26 - 29, Dynamic URL segments are being
concatenated raw (e.g. fmt.Fprintf(&b, "%s/document/-/format/%s/", opts.UUID,
format) and the opts.Page segment), which can break paths; import net/url and
wrap dynamic parts with url.PathEscape (e.g. url.PathEscape(opts.UUID),
url.PathEscape(format)) before writing them into the buffer, and apply the same
escaping to all other dynamic segments in the other block (lines handling
opts.Page and the 58-73 section) so reserved characters are percent-encoded.

Comment on lines 27 to +35
func (s service) Info(
	ctx context.Context,
	fileID string,
	params *InfoParams,
) (data Info, err error) {
	var reqParams ucare.ReqEncoder
	if params != nil {
		reqParams = params
	}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In Go, is changing an exported method from `Info(ctx, fileID)` to `Info(ctx, fileID, params *InfoParams)` source-compatible? Is a variadic parameter such as `params ...*InfoParams` a common way to evolve the API without breaking existing call sites?

💡 Result:

No.

  • Changing Info(ctx, fileID) to Info(ctx, fileID, params *InfoParams) is not source-compatible: all existing call sites now have “not enough arguments” and won’t compile (it’s a signature change, which is inherently breaking). (go.dev)
  • Changing it to Info(ctx, fileID, params ...*InfoParams) is call-compatible with existing call sites (variadic functions “may be invoked with zero or more arguments” for the ... parameter). (go.dev)

However, the variadic trick is still not fully source-/backward-compatible in Go, because it changes the function/method type. That can break code that doesn’t call it directly, e.g. assigning the method/function to a variable of the old type, using method expressions, or satisfying an interface with the old method signature. The Go team explicitly warns that “call compatibility is not enough for backward compatibility” and that there is “no backward-compatible change you can make to a function’s signature.” (go.dev)

Common evolution patterns instead:

  • Add a new method (often with a suffix) and keep the old one (like Query + QueryContext). (go.dev)
  • If you anticipate future knobs, accept an options/config struct (possibly nil) from the start, so new fields can be added compatibly. (go.dev)


This parameter addition is a compile-time breaking change.

Existing Info(ctx, fileID) callers won't compile until updated with a third argument. While a variadic params ...*InfoParams restores call compatibility for direct callers, it changes the method type signature, which still breaks code that assigns the method to a variable of the old type, uses method expressions, or satisfies an interface with the original signature.

Go's guidance on API evolution recommends either: (1) add a new method (e.g., InfoWithParams) and keep the old one, or (2) if you anticipated future options, design the method to accept an options/config struct from the start (possibly nil). This avoids signature changes altogether.

If using a variadic parameter
-func (s service) Info(
-	ctx context.Context,
-	fileID string,
-	params *InfoParams,
-) (data Info, err error) {
+func (s service) Info(
+	ctx context.Context,
+	fileID string,
+	params ...*InfoParams,
+) (data Info, err error) {
 	var reqParams ucare.ReqEncoder
-	if params != nil {
-		reqParams = params
+	if len(params) > 0 && params[0] != nil {
+		reqParams = params[0]
 	}

Apply the same signature change to any public service interface this method implements.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-func (s service) Info(
-	ctx context.Context,
-	fileID string,
-	params *InfoParams,
-) (data Info, err error) {
-	var reqParams ucare.ReqEncoder
-	if params != nil {
-		reqParams = params
-	}
+func (s service) Info(
+	ctx context.Context,
+	fileID string,
+	params ...*InfoParams,
+) (data Info, err error) {
+	var reqParams ucare.ReqEncoder
+	if len(params) > 0 && params[0] != nil {
+		reqParams = params[0]
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@file/file.go` around lines 27 - 35, The new third parameter on service.Info
is a breaking change; restore the original signature Info(ctx context.Context,
fileID string) (Info, error) and add a new method InfoWithParams(ctx
context.Context, fileID string, params *InfoParams) (Info, error) that contains
the logic that accepts params (or delegates to a shared internal helper), and
update any public service interface to include InfoWithParams while keeping Info
unchanged so existing callers, method expressions, and implementations keep
compiling.

Comment on lines +129 to +132
	m, ok := f.Interface().(map[string]string)
	if !ok {
		panic(wrongMapType)
	}

⚠️ Potential issue | 🟠 Major

Avoid panic in query encoding for unsupported map types.

This introduces a hard crash path in SDK request building. Since EncodeReqQuery returns error, this should fail gracefully instead of panicking.

Proposed fix
-			m, ok := f.Interface().(map[string]string)
-			if !ok {
-				panic(wrongMapType)
-			}
+			m, ok := f.Interface().(map[string]string)
+			if !ok {
+				return TypeErr{
+					Data: tf.Name,
+					Type: "map[string]string",
+				}
+			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/codec/codec.go` around lines 129 - 132, In EncodeReqQuery, remove
the panic triggered when f.Interface() does not assert to map[string]string (the
wrongMapType panic) and instead return a descriptive error from EncodeReqQuery;
locate the branch around the m, ok := f.Interface().(map[string]string) check
and on !ok return fmt.Errorf or a wrapped error that includes the actual type
(use reflect.TypeOf(f.Interface())) and mention unsupported map type so the
caller can handle the failure gracefully.

Comment on lines +33 to +67
+// AuthError represents an authentication failure (HTTP 401).
+type AuthError struct{ APIError }
+
-type throttleErr struct {
+func (e AuthError) Error() string {
+	return fmt.Sprintf("uploadcare: authentication failed: %s", e.Detail)
+}
+
+// ThrottleError represents a throttled request (HTTP 429).
+type ThrottleError struct {
 	RetryAfter int
 }
 
-func (e throttleErr) Error() string {
+func (e ThrottleError) Error() string {
 	if e.RetryAfter == 0 {
-		return "Request was throttled."
+		return "uploadcare: request throttled"
 	}
 	return fmt.Sprintf(
-		"Request was throttled. Expected available in %d second",
+		"uploadcare: request throttled, retry after %d seconds",
 		e.RetryAfter,
 	)
 }
 
-type reqValidationErr struct{ respErr }
+// ValidationError represents a request validation failure (HTTP 400).
+type ValidationError struct{ APIError }
 
-func (e reqValidationErr) Error() string {
-	return fmt.Sprintf("Request parameters validation error: %s", e.Details)
+func (e ValidationError) Error() string {
+	return fmt.Sprintf("uploadcare: validation error: %s", e.Detail)
 }
 
-type reqForbiddenErr struct{ respErr }
+// ForbiddenError represents a forbidden request (HTTP 403).
+type ForbiddenError struct{ APIError }
+
+func (e ForbiddenError) Error() string {
+	return fmt.Sprintf("uploadcare: forbidden: %s", e.Detail)
+}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In Go, does errors.As match an embedded error field without an Unwrap method? For a wrapper like `type AuthError struct{ APIError }`, what is the idiomatic way to preserve errors.As(err, &apiErr) behavior?

💡 Result:

No. errors.As only walks the error tree formed by Unwrap() error / Unwrap() []error (and any custom As(any) bool matcher). It does not look inside arbitrary fields (embedded or not) unless they’re reachable via Unwrap (or matched via a custom As). [1]

So for:

type AuthError struct{ APIError } // APIError has Error() but no Unwrap()

errors.As(err, &apiErr) will not find the embedded APIError just because it’s embedded.

Idiomatic way to preserve errors.As(err, &apiErr) is to make AuthError a real wrapper by providing Unwrap (or by constructing it with %w):

type AuthError struct{ APIError }

func (e AuthError) Unwrap() error { return e.APIError } // or &e.APIError if you store a pointer

This is the standard pattern for wrapper types that want errors.Is/As to see the underlying error. [2][1]

Alternative (less common): implement As(any) bool on AuthError to “pretend” it’s an *APIError, but Unwrap is the usual/idiomatic approach. [1]

Sources:
[1] Go errors.As docs (error “tree”, Unwrap, custom As) — pkg.go.dev/errors
[2] Go 1.13 error wrapping article (wrapper types should add Unwrap) — go.dev/blog/go1.13-errors


Add Unwrap methods to enable error chain matching.

Embedding APIError alone does not make it accessible to errors.As. In Go, error chains are built only via Unwrap() methods (or custom As implementations). Without Unwrap, callers cannot use errors.As(err, &apiErr) to match through AuthError, ValidationError, or ForbiddenError. Additionally, ThrottleError should embed APIError to preserve StatusCode and Detail.

Add `Unwrap` methods to all error types
 type AuthError struct{ APIError }
+func (e AuthError) Unwrap() error { return e.APIError }

 type ThrottleError struct {
+	APIError
 	RetryAfter int
 }
+func (e ThrottleError) Unwrap() error { return e.APIError }

 type ValidationError struct{ APIError }
+func (e ValidationError) Unwrap() error { return e.APIError }

 type ForbiddenError struct{ APIError }
+func (e ForbiddenError) Unwrap() error { return e.APIError }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-// AuthError represents an authentication failure (HTTP 401).
-type AuthError struct{ APIError }
-
-func (e AuthError) Error() string {
-	return fmt.Sprintf("uploadcare: authentication failed: %s", e.Detail)
-}
-
-// ThrottleError represents a throttled request (HTTP 429).
-type ThrottleError struct {
-	RetryAfter int
-}
-
-func (e ThrottleError) Error() string {
-	if e.RetryAfter == 0 {
-		return "uploadcare: request throttled"
-	}
-	return fmt.Sprintf(
-		"uploadcare: request throttled, retry after %d seconds",
-		e.RetryAfter,
-	)
-}
-
-// ValidationError represents a request validation failure (HTTP 400).
-type ValidationError struct{ APIError }
-
-func (e ValidationError) Error() string {
-	return fmt.Sprintf("uploadcare: validation error: %s", e.Detail)
-}
-
-// ForbiddenError represents a forbidden request (HTTP 403).
-type ForbiddenError struct{ APIError }
-
-func (e ForbiddenError) Error() string {
-	return fmt.Sprintf("uploadcare: forbidden: %s", e.Detail)
-}
+// AuthError represents an authentication failure (HTTP 401).
+type AuthError struct{ APIError }
+
+func (e AuthError) Unwrap() error { return e.APIError }
+
+func (e AuthError) Error() string {
+	return fmt.Sprintf("uploadcare: authentication failed: %s", e.Detail)
+}
+
+// ThrottleError represents a throttled request (HTTP 429).
+type ThrottleError struct {
+	APIError
+	RetryAfter int
+}
+
+func (e ThrottleError) Unwrap() error { return e.APIError }
+
+func (e ThrottleError) Error() string {
+	if e.RetryAfter == 0 {
+		return "uploadcare: request throttled"
+	}
+	return fmt.Sprintf(
+		"uploadcare: request throttled, retry after %d seconds",
+		e.RetryAfter,
+	)
+}
+
+// ValidationError represents a request validation failure (HTTP 400).
+type ValidationError struct{ APIError }
+
+func (e ValidationError) Unwrap() error { return e.APIError }
+
+func (e ValidationError) Error() string {
+	return fmt.Sprintf("uploadcare: validation error: %s", e.Detail)
+}
+
+// ForbiddenError represents a forbidden request (HTTP 403).
+type ForbiddenError struct{ APIError }
+
+func (e ForbiddenError) Unwrap() error { return e.APIError }
+
+func (e ForbiddenError) Error() string {
+	return fmt.Sprintf("uploadcare: forbidden: %s", e.Detail)
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ucare/error.go` around lines 33 - 67, Add Unwrap methods so callers can use
errors.As to access the embedded APIError: embed APIError into ThrottleError
(change ThrottleError to include APIError) and implement Unwrap on AuthError,
ValidationError, ForbiddenError and ThrottleError (use the same receiver style
as their Error methods) to return the embedded APIError as an error; this
exposes StatusCode/Detail for errors.As and preserves the existing Error()
behavior.

Comment on lines 100 to 121
 func (c *restAPIClient) Do(req *http.Request, resdata interface{}) error {
-	tries := 0
+	for tries := 1; ; tries++ {
+		if tries > 1 && req.GetBody != nil {
+			var err error
+			req.Body, err = req.GetBody()
+			if err != nil {
+				return err
+			}
+		}
 
-try:
-	tries++
+		log.Debugf("making %d request: %+v", tries, req)
 
-	if tries > 1 && req.GetBody != nil {
-		var err error
-		req.Body, err = req.GetBody()
+		resp, err := c.conn.Do(req)
 		if err != nil {
 			return err
 		}
-	}
 
-	log.Debugf("making %d request: %+v", tries, req)
-
-	resp, err := c.conn.Do(req)
-	if err != nil {
-		return err
-	}
-	if req.Body != nil {
-		defer req.Body.Close()
+		retry, err := c.handleResponse(resp, req, resdata, tries)
+		if err != nil || !retry {
+			return err
+		}
 	}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In Go's net/http package, what does the documentation say about reusing an http.Request with a non-nil Body after Client.Do, and when is Request.GetBody required for retries or redirects?

💡 Result:

  • Reusing an http.Request with a non-nil Body after Client.Do: the net/http docs state that the request Body, if non-nil, will be closed by the underlying Transport (even on errors), and it may be closed asynchronously after Do returns—so you should not expect to be able to reuse the same Request.Body after Do without replacing/resetting it. [1]

  • When Request.GetBody is required (retries / redirects):

    • Request.GetBody is an optional function that returns a fresh copy of Body, and it is used for client requests when a redirect requires reading the body more than once (you still must set Body too). [2]
    • In Client.Do, 307/308 redirects preserve the original method and body only if Request.GetBody is defined (whereas 301/302/303 switch to GET/HEAD with no body). [1]
    • For retries you implement yourself, GetBody is effectively required any time the request body must be sent again (because the original Body will have been consumed/closed); the standard library explicitly documents this “read more than once” requirement in the redirect context. [2]

Sources:
[1] Go net/http docs — Client.Do (redirect behavior; request body closed, possibly asynchronously) (pkg.go.dev)
[2] Go net/http docs — Request.GetBody field description (used when body must be read more than once; still requires setting Body) (pkg.go.dev)


Don't retry request bodies without GetBody replay support.

Client.Do is public and can receive custom requests with a body but no GetBody function defined. Go's net/http.Transport closes the request Body after each Do call, so retrying the same request on a 429 without resetting the body will attempt to send an already-closed/consumed body. Check that GetBody is available before retrying; otherwise, fail immediately.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ucare/restapi.go` around lines 100 - 121, In restAPIClient.Do, the loop can
attempt retries even when the request has a Body but no GetBody, causing replay
of a closed/consumed body; modify the retry logic in Do (and its interaction
with handleResponse) so that if handleResponse requests a retry but req.GetBody
is nil (and req.Body != nil), Do returns an error immediately instead of
looping; also ensure the existing reset code that calls req.GetBody to reassign
req.Body only runs when req.GetBody != nil and that the error returned explains
the lack of GetBody support for retrying the request.

Comment on lines 135 to +155
 	case 400, 404:
-		var err respErr
-		if e := json.NewDecoder(resp.Body).Decode(&err); e != nil {
-			return e
+		var apiErr APIError
+		if e := json.NewDecoder(resp.Body).Decode(&apiErr); e != nil {
+			return false, e
 		}
-		resp.Body.Close()
-		return err
+		apiErr.StatusCode = resp.StatusCode
+		return false, apiErr
 	case 401:
-		var err authErr
-		if e := json.NewDecoder(resp.Body).Decode(&err); e != nil {
-			return e
+		var authErr AuthError
+		if e := json.NewDecoder(resp.Body).Decode(&authErr); e != nil {
+			return false, e
 		}
-		resp.Body.Close()
-		return err
+		authErr.StatusCode = 401
+		return false, authErr
+	case 403:
+		var forbiddenErr ForbiddenError
+		if e := json.NewDecoder(resp.Body).Decode(&forbiddenErr); e != nil {
+			return false, e
+		}
+		forbiddenErr.StatusCode = 403
+		return false, forbiddenErr

⚠️ Potential issue | 🟠 Major

Keep structured HTTP errors even when the body is empty or plain text.

These branches return the JSON decode error directly for 400/401/403/404, and the generic >=400 path falls back to StatusText instead of the raw body. That means callers can see EOF or lose the server's actual message rather than getting APIError/AuthError/ForbiddenError with a populated StatusCode. A read-once fallback to trimmed body text would keep the new error contract intact.

Also applies to: 185-191

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ucare/restapi.go` around lines 135 - 155, The error-handling branches that
decode JSON into APIError/AuthError/ForbiddenError (the switch on
resp.StatusCode handling 400/404, 401, 403) currently return the JSON decode
error directly, which loses the structured error when the body is empty or plain
text; change each branch to attempt json.Decode into the respective struct
(APIError, AuthError, ForbiddenError), and if decode fails, read resp.Body once
(io.ReadAll), trim the text, set that trimmed string into the struct's
Message/Detail field (or appropriate field), ensure StatusCode is set
(resp.StatusCode or 401/403), and return the populated struct; apply the same
fallback behavior used in the generic >=400 path (lines ~185-191) so callers
always receive a structured error object even when JSON decoding fails.

Comment on lines +195 to +218
+	if isNilResponseData(resdata) {
+		return false, nil
+	}
+
-	err = json.NewDecoder(resp.Body).Decode(&resdata)
-	if err != nil {
-		return err
+	if err := json.NewDecoder(resp.Body).Decode(resdata); err != nil {
+		return false, err
 	}
-	resp.Body.Close()
 
-	return nil
+	return false, nil
 }
 
+func isNilResponseData(resdata interface{}) bool {
+	if resdata == nil {
+		return true
+	}
+
+	v := reflect.ValueOf(resdata)
+	switch v.Kind() {
+	case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map,
+		reflect.Ptr, reflect.Slice:
+		return v.IsNil()
+	default:
+		return false
+	}

⚠️ Potential issue | 🟠 Major

Only literal nil should mean “skip decoding”.

isNilResponseData currently treats typed nil maps/slices/pointers as “no destination”, so client.Do(req, out) can return nil on a 2xx response while dropping the payload entirely. That silently masks the same caller error the upload client now exposes.

🧩 Proposed fix
 func isNilResponseData(resdata interface{}) bool {
-	if resdata == nil {
-		return true
-	}
-
-	v := reflect.ValueOf(resdata)
-	switch v.Kind() {
-	case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map,
-		reflect.Ptr, reflect.Slice:
-		return v.IsNil()
-	default:
-		return false
-	}
+	return resdata == nil
 }
// remove this import once the helper no longer uses reflection
import "reflect"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ucare/restapi.go` around lines 195 - 218, The helper is incorrectly treating
typed nils as "no destination", causing responses to be discarded; update
isNilResponseData to return true only when resdata == nil (remove the
reflect-based Kind/IsNil checks and the reflect import) so that typed nil
pointers/slices/maps still allow decoding to occur (adjust any tests if needed);
ensure the Do flow that calls isNilResponseData (the caller around
json.NewDecoder(resp.Body).Decode(resdata)) will attempt Decode when resdata is
a typed nil.

Comment on lines +37 to +41
func (s service) Upload(ctx context.Context, params UploadParams) (FileInfo, error) {
	threshold := DefaultMultipartThreshold
	if params.MultipartThreshold != nil {
		threshold = *params.MultipartThreshold
	}

⚠️ Potential issue | 🟡 Minor

Reject negative Size values before threshold selection.

Size < 0 currently falls through to the direct-upload branch because params.Size > threshold is false. That silently disables multipart selection for invalid input and can route a large payload to the wrong endpoint.

🛡️ Proposed fix
 func (s service) Upload(ctx context.Context, params UploadParams) (FileInfo, error) {
+	if params.Size < 0 {
+		return FileInfo{}, errors.New("upload: size must be non-negative")
+	}
+
 	threshold := DefaultMultipartThreshold
// add alongside the existing imports
import "errors"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@upload/upload.go` around lines 37 - 41, The Upload method currently lets
params.Size < 0 fall through to direct upload; validate and reject negative Size
at the start of Upload (before computing threshold) by returning an error (use
the errors package) when params.Size is negative. Update the Upload function to
check params.Size < 0 and return a clear error (e.g., "invalid size: must be
non-negative") so multipart selection logic that uses MultipartThreshold and
DefaultMultipartThreshold is not bypassed; reference the Upload method and
UploadParams.Size/MultipartThreshold symbols when adding the check.

Comment on lines +58 to +69
func (s service) uploadDirect(ctx context.Context, params UploadParams) (FileInfo, error) {
	id, err := s.File(ctx, FileParams{
		Data:        params.Data,
		Name:        params.Name,
		ContentType: params.ContentType,
		ToStore:     params.ToStore,
		Metadata:    params.Metadata,
	})
	if err != nil {
		return FileInfo{}, err
	}
	return s.FileInfo(ctx, id)

⚠️ Potential issue | 🟠 Major

Don't hide a successful direct upload behind the follow-up lookup.

After s.File() returns, the file already exists. If the immediate s.FileInfo() call then fails, Upload surfaces only that read error and callers are likely to retry the whole operation, duplicating the upload. Please preserve the uploaded file ID on this path or avoid the extra lookup here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@upload/upload.go` around lines 58 - 69, The current uploadDirect calls
s.File(...) then immediately calls s.FileInfo(...) and if the lookup fails the
upload is reported as an error, which can cause duplicate uploads; change
uploadDirect to preserve and return the successful upload id when s.File
succeeds instead of hiding it behind the follow-up lookup: after s.File(...)
returns id, call s.FileInfo(...) only as an optional best-effort (e.g., if it
succeeds return that FileInfo), but if s.FileInfo fails return FileInfo with at
least the ID set (e.g., FileInfo{ID: id}) and nil error (or return the id
alongside a wrapped error that includes the id) so callers know the upload
succeeded; update function uploadDirect and any callers expecting full metadata
accordingly (references: uploadDirect, s.File, s.FileInfo, UploadParams,
FileInfo).
