Feature/object storage: minio, cloudflare R2, blackbaze and rustfs #198
Conversation
- Add connection icons for MinIO, Cloudflare R2, Backblaze B2, and rustfs - Add comprehensive planning documentation for HTTPS object stores integration - Add object stores AI context documentation with provider-specific details - Add DuckDB HTTPS object stores context documentation for httpfs extension - Extend Cloud Explorer feature to support S3-compatible and self-hosted storage solutions - Enable integration with MinIO (self-hosted), Cloudflare R2 (zero egress), Backblaze B2 (cost-effective), and rustfs (lightweight) - Provide phased implementation plan with DuckDB secret configuration examples for each provider
- Add MinIO provider icon and integrate into connection icons index - Implement MinIO backend service with S3-compatible client configuration - Add MinIO IPC handlers for bucket listing, object browsing, and preview operations - Create MinIO connection form with endpoint, SSL, access key, and region fields - Integrate secure credential storage for MinIO Secret Access Key - Add MinIO-specific DuckDB secret configuration with custom endpoint support - Update type definitions to include MinIO provider across frontend and backend - Add comprehensive form validation and error handling for MinIO connections - Update implementation plan documentation with Phase 1 completion status and testing instructions - Enable path-style URL support and optional SSL/TLS configuration for self-hosted deployments
…e storage - Add Cloudflare R2 provider to cloud storage connections with auto-generated endpoint from Account ID - Implement R2-specific connection form fields (accountId, accessKeyId, secretAccessKey, jurisdiction) - Add native DuckDB R2 secret type configuration with 'auto' region support - Integrate Cloudflare R2 icon asset and provider icon mapping - Add secure credential storage for R2 Secret Access Key - Implement backend service with S3-compatible client for R2 operations - Add IPC handlers for R2 bucket listing and object browsing - Add type definitions for CloudflareR2Config throughout stack - Support optional EU jurisdiction configuration for compliance - Update implementation plan documentation with Phase 2 completion status - Add comprehensive testing instructions and validation error handling
…ct storage - Add Backblaze B2 provider icon and integration to cloud storage images - Implement backend service with S3-compatible client and dynamic region extraction - Add IPC handlers for B2 cloud operations (list buckets, objects, preview) - Create frontend connection form with B2-specific credential fields - Integrate secure credential storage for B2 authentication - Add DuckDB S3 secret configuration with automatic region detection - Support multi-region endpoints (US West and EU Central) - Implement explicit v4 signature configuration for B2 API compatibility - Add comprehensive form validation and error handling - Update documentation with Phase 3 completion status and implementation details - Add type definitions for BackblazeB2Config throughout the stack - Production tested with European B2 bucket for multi-region support
…rage - Add rustfs icon to connection icons index - Implement RustfsConfig interface in type system - Add rustfs provider to backend cloud authentication helper with DuckDB secret configuration - Implement rustfs methods in backend cloud explorer service (create, list, download, test connection) - Add rustfs provider card and form fields to ConnectionForm UI component - Update ExplorerBuckets and ExplorerBucketContent to handle rustfs credentials - Implement secure credential storage for rustfs in useSecureStorage hook - Add rustfs URL generation in cloud URL helper - Update frontend cloud explorer service with rustfs support - Update implementation documentation marking Phase 4 as complete - Completes all 7 providers: AWS S3, Azure Blob, GCS, MinIO, Cloudflare R2, Backblaze B2, and rustfs
- Add ViewToggle component for switching between list and card views - Implement dual view support in ExplorerBuckets with list and card layouts - Implement dual view support in ExplorerBucketContent with list and card layouts - Add localStorage persistence for view preferences per component - Add bucket-blue.png icon asset for UI enhancements - Update ConnectionForm, ExplorerDashboard, and related components for view toggle integration - Update cloudAuth helper and cloud services to support enhanced UI - Add comprehensive planning documentation for cloud explorer UI enhancement - Update frontend types to support view state management - Improve information density with list view as default while maintaining card view option
- Remove debug console.log statements from cloudAuth.helper.ts for MinIO, R2, B2, and rustfs secret building - Remove debug console.log statements from cloudExplorer.service.ts for S3 client initialization and error handling - Remove debug console.log statements from ConnectionForm.tsx during connection testing - Remove debug console.log statements from ExplorerBuckets.tsx - Clean up codebase by eliminating verbose logging that was used during development and testing
📝 Walkthrough

This pull request extends cloud storage support by integrating four new S3-compatible providers (MinIO, Cloudflare R2, Backblaze B2, RustFS) across backend services, IPC handlers, secure storage, and frontend components. Updates include new provider configuration types, helper methods for secret management and credential escaping, an expanded explorer UI with list/card view modes, and enhanced form fields for provider-specific settings.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant Form as ConnectionForm
    participant Storage as SecureStorage
    participant IPC as IPC Handler
    participant Auth as CloudAuth
    participant Service as CloudService
    participant Provider as Cloud Provider

    User->>Form: Fill provider config (e.g., MinIO)
    Form->>Form: Validate required fields
    User->>Form: Click "Test Connection"
    Form->>IPC: testConnection(provider, config)
    IPC->>Auth: Generate provider-specific secret
    Auth->>Storage: Store secret (minio_secret, r2_secret, etc.)
    Auth->>Service: Create S3 client with config
    Service->>Provider: Attempt connection/list buckets
    Provider-->>Service: Success or error
    Service-->>IPC: Connection result
    IPC-->>Form: Display result to user
    User->>Form: Confirm & Save Connection
    Form->>Storage: Persist provider secrets
    Form->>IPC: Create connection
    IPC-->>Form: Connection created
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 7
Note
Due to the large number of review comments, comments of Critical and Major severity were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/renderer/components/cloudExplorer/ExplorerBucketContent.tsx (1)
107-150: ⚠️ Potential issue | 🟠 Major
Missing credentials silently replaced with empty strings — differs from ExplorerBuckets behavior.

In `ExplorerBuckets.tsx`, when a secret returns `null`, it sets `credentialsMissing = true` and shows a user-facing alert prompting the user to edit the connection. Here, `null` secrets are replaced with `''` (e.g., line 118: `secret || ''`), causing the component to proceed with empty credentials. This will result in cryptic backend errors instead of a clear "credentials missing" message. Consider matching the `ExplorerBuckets` pattern: track `credentialsMissing` state and show a similar alert.

src/main/ipcHandlers/cloudExplorer.ipcHandlers.ts (1)
133-173: ⚠️ Potential issue | 🟠 Major
Credentials are leaked in error messages via `JSON.stringify(config)`.

Lines 141, 152, 163, and 170 serialize the entire config object (including `secretAccessKey`, `applicationKey`, etc.) into the error message. These errors may surface in the renderer UI, logs, or error reporting, exposing secrets. Strip sensitive fields before including config in error messages, or only log the field names that are missing.

🔒 Proposed fix (example for AWS block)

```diff
   if (
     !s3Config.region ||
     !s3Config.accessKeyId ||
     !s3Config.secretAccessKey
   ) {
     throw new Error(
-      `Invalid AWS config: missing required fields. Received: ${JSON.stringify(s3Config)}`,
+      'Invalid AWS config: missing required fields (region, accessKeyId, secretAccessKey).',
     );
   }
```

Apply the same pattern to the MinIO, Cloudflare R2, and Backblaze B2 blocks.
🤖 Fix all issues with AI agents
In `@docs/20-plan-new-https-object-stores.md`:
- Around line 1074-1077: Replace the hardcoded Cloudflare Account ID string
"1317e314d442cf798184588f7ba4866a" in the AWS CLI example (the --endpoint-url
value inside the bash snippet) with a non-sensitive placeholder like
"<CLOUDFLARE_ACCOUNT_ID>" or "<R2_ACCOUNT_ID>" so the docs no longer expose a
real account identifier; update the example command in the
docs/20-plan-new-https-object-stores.md snippet accordingly.
In `@src/main/helpers/cloudAuth.helper.ts`:
- Around line 129-209: The SQL strings build in cloudAuth.helper.ts (inside the
CREATE OR REPLACE SECRET blocks using dropSecretsQuery) interpolate
user-controlled fields (e.g., minioConfig.accessKeyId,
minioConfig.secretAccessKey, minioConfig.endpoint, r2Config.accountId,
b2Config.applicationKeyId/applicationKey, rustfsConfig.* and region/endpoint
fields) and must be sanitized to prevent quote-breaking or injection; add a
small helper (e.g., escapeSingleQuotes or escapeSqlString) that replaces single
quotes with escaped single quotes and apply it to every interpolated config
value used in the secret creation templates (refer to the minio, cloudflare-r2,
backblaze-b2, rustfs cases and also update existing aws/gcs/azure cases) so all
KEY_ID, SECRET, REGION, ENDPOINT, ACCOUNT_ID, etc. are escaped before being
embedded into the template strings.
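To make the escaping concrete, here is a minimal TypeScript sketch of the kind of helper described above. The helper name matches the review comment, but the secret template, parameter names, and defaults are illustrative assumptions rather than the PR's actual implementation.

```typescript
// A sketch only: escape single quotes so config values cannot break out of
// the quoted literals in the generated DuckDB SQL.
export function escapeSqlString(value: string | undefined | null): string {
  if (value === undefined || value === null) {
    return '';
  }
  return String(value).replace(/'/g, "''");
}

// Hypothetical call site shaped like the MinIO case described above.
export function buildMinioSecretQuery(cfg: {
  accessKeyId: string;
  secretAccessKey: string;
  endpoint: string;
  region?: string;
  useSSL?: boolean;
}): string {
  return `
    CREATE OR REPLACE SECRET minio_secret (
      TYPE S3,
      KEY_ID '${escapeSqlString(cfg.accessKeyId)}',
      SECRET '${escapeSqlString(cfg.secretAccessKey)}',
      ENDPOINT '${escapeSqlString(cfg.endpoint)}',
      REGION '${escapeSqlString(cfg.region ?? 'us-east-1')}',
      URL_STYLE 'path',
      USE_SSL ${cfg.useSSL ? 'true' : 'false'}
    );
  `;
}
```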
In `@src/main/ipcHandlers/cloudExplorer.ipcHandlers.ts`:
- Around line 30-36: The provider union typed inline for the `provider` field in
cloudExplorer IPC handler types is missing 'rustfs' and is duplicated across the
file; import and reuse the canonical CloudProvider type instead of the inline
union (replace the `provider: | 'aws' | 'azure' | ...` declarations with
`provider: CloudProvider`) and update all occurrences (every `provider` union in
this file) so rustfs calls are handled consistently by the IPC handlers.
In `@src/main/services/cloudExplorer.service.ts`:
- Around line 1257-1265: Replace the repeated inline provider union with the
shared CloudProvider type: import CloudProvider from the file where it's
exported (frontend.ts) and update the method signatures for listBuckets,
listObjects, getDownloadUrl, and testConnection to use CloudProvider instead of
the inline union; ensure any other occurrences (the ranges mentioned around
lines 1289–1296, 1358–1365, 1419–1426) are updated too so the set of providers
(including 'rustfs') stays in sync across the codebase.
- Around line 706-711: The validation for Cloudflare R2 Account IDs incorrectly
allows 16–64 alphanumeric chars; update the check around config.accountId in
cloudExplorer.service.ts (the block that throws the Error) to require exactly 32
alphanumeric characters and adjust the error text accordingly. Replace the
current regex test on config.accountId with one that enforces 32 characters
(alphanumeric only) and make the thrown Error message state "Account ID must be
exactly 32 alphanumeric characters" so the behavior and message align with
Cloudflare's API requirements.
In `@src/main/services/cloudPreview.service.ts`:
- Around line 48-49: The console.log call that prints the full secretQuery (in
cloudPreview.service.ts where buildCloudSecretQuery produces secretQuery) leaks
credentials; remove that console.log or replace it with logging that redacts
sensitive values (e.g., mask access_key/secret_key tokens before logging) so the
CREATE SECRET statement is never emitted with raw keys; update the code that
references secretQuery to either not log at all or to sanitize/mask the
sensitive fields prior to any logging or telemetry.
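A possible shape for the redaction step, assuming DuckDB-style `KEY_ID`/`SECRET` parameters inside the generated statement; the helper name is hypothetical.

```typescript
// Sketch of a redaction step so the generated CREATE SECRET statement can be
// logged without exposing credentials. The parameter names mirror DuckDB's
// secret syntax; the helper name is an assumption.
export function redactSecretQuery(secretQuery: string): string {
  return secretQuery.replace(
    /\b(KEY_ID|SECRET|SESSION_TOKEN|CONNECTION_STRING)\s+'[^']*'/gi,
    "$1 '***'",
  );
}

// Instead of: console.log(secretQuery);
// console.log(redactSecretQuery(secretQuery));
```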
In `@src/renderer/services/cloudExplorer.service.ts`:
- Around line 10-18: Replace the repeated string union type with the shared
CloudProvider type: update listBuckets and the other four methods in
cloudExplorer.service.ts to use CloudProvider for the provider parameter and any
generic constraints (replace occurrences of "'aws' | 'azure' | 'gcs' | 'minio' |
'cloudflare-r2' | 'backblaze-b2' | 'rustfs'"); add the appropriate import for
the CloudProvider type at the top of the file and ensure all 10 locations
(parameter types and generics) reference CloudProvider instead of the inline
union.
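For reference, a sketch of what the shared union and its reuse might look like. The member list follows the providers named in this PR; the function shown is only an illustration of replacing the inline union, not the actual service signature.

```typescript
// Sketch of the shared union; in this PR the canonical export reportedly
// lives in src/types/frontend.ts.
export type CloudProvider =
  | 'aws'
  | 'azure'
  | 'gcs'
  | 'minio'
  | 'cloudflare-r2'
  | 'backblaze-b2'
  | 'rustfs';

// Hypothetical signature after the change, replacing the inline union:
export function listBuckets(
  provider: CloudProvider,
  config: unknown,
): Promise<unknown[]> {
  // ...delegate to the provider-specific implementation
  return Promise.resolve([]);
}
```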
🟡 Minor comments (15)
docs/duckdb-https-object-stores-ai-context.md-39-136 (1)
39-136: ⚠️ Potential issue | 🟡 Minor
Duplicate content: HTTP(S) Support section appears twice verbatim.
The "HTTP(S) Support" section (Lines 39–136) is repeated almost identically at Lines 172–269. This inflates the file and creates a maintenance burden — any update must be applied in both places. Remove the duplicate section.
Also applies to: 172-269
docs/20a-plan-cloud-object-explorer-enhance-ui.md-34-34 (1)
34-34: ⚠️ Potential issue | 🟡 Minor
Typo in screenshot filename: "bukctes" → "buckets".

`minio-bukctes.png` appears to be misspelled. If the actual file on disk matches this typo, rename the file as well; otherwise the reference will be a broken link.

```diff
-- `dbt-studio/docs/minio-bukctes.png` - Target buckets list view
+- `dbt-studio/docs/minio-buckets.png` - Target buckets list view
```

assets/connectionIcons/index.ts-18-18 (1)
18-18: ⚠️ Potential issue | 🟡 Minor
Typo in asset filename: `blackbaze.png` → `backblaze.png`.

The import references `./blackbaze.png`, which misspells "backblaze". Rename the actual file on disk and update this import to avoid confusion.

```diff
-import backblazeB2 from './blackbaze.png';
+import backblazeB2 from './backblaze_b2.png';
```

(Also note the pre-existing `s3_bucktet.png` on Line 15 — consider fixing that in a follow-up.)

docs/20-plan-new-https-object-stores.md-581-590 (1)
581-590: ⚠️ Potential issue | 🟡 Minor
Phase 3 success criteria checkboxes are unchecked despite "✅ COMPLETE" status.

The Phase 3 section header (line 218) and progress table (line 652) both claim Backblaze B2 is complete, but the success criteria here still show all items unchecked (`[ ]`). This inconsistency is confusing for anyone referencing this document.

src/renderer/components/cloudExplorer/ExplorerBuckets.tsx-187-194 (1)
187-194: ⚠️ Potential issue | 🟡 Minor
Sortable "Size" column has no effect — misleading UX.

The `size` sort case explicitly sets `comparison = 0`, so clicking the Size column header does nothing. Yet `renderListView` (lines 262–268) still renders a clickable `TableSortLabel` for it. Either implement size sorting (if bucket size data is available) or remove the sort label from that column to avoid confusing users.

docs/20-plan-new-https-object-stores.md-635-635 (1)
635-635: ⚠️ Potential issue | 🟡 Minor
Truncated sentence.
The text ends mid-word: "DuckDB connection pooling ensures resour". Complete the sentence.
docs/20-plan-new-https-object-stores.md-1370-1395 (1)
1370-1395: ⚠️ Potential issue | 🟡 Minor
Phase 4 (rustfs) heading is also duplicated.
Lines 1370 and 1383 both contain "Phase 4: rustfs Integration (PENDING)" headers with slightly different content, flagged by markdownlint as well. Merge these.
docs/20-plan-new-https-object-stores.md-3-3 (1)
3-3: ⚠️ Potential issue | 🟡 Minor
Typo and LLM-directed instruction visible in published documentation.
Line 3 contains a directive aimed at an LLM ("Do not generate .md ai context docs on your own, you are LLm…") and a spelling error ("genreate"). This instruction should not appear in published project documentation.
src/renderer/components/cloudExplorer/ExplorerBuckets.tsx-496-533 (1)
496-533: ⚠️ Potential issue | 🟡 Minor
Loading skeleton always renders as a table, even when card view is selected.

The loading state unconditionally renders a table skeleton. When the user has selected card view, this creates a visual mismatch — they'll see a table skeleton that abruptly switches to a card grid once loading completes. Consider rendering a card-style skeleton when `view === 'card'`.

docs/20-plan-new-https-object-stores.md-866-916 (1)
866-916: ⚠️ Potential issue | 🟡 Minor
Phase 2 is duplicated with conflicting information.
Phase 2 (Cloudflare R2) appears twice: once starting at line 866 and again at line 912. The first instance states "Actual Effort: 3 hours" while the second states "Actual Effort: 2 hours". Additionally, the first instance's checklist (lines 882–897) contains both checked and unchecked duplicates of the same items. Consolidate into a single section with consistent data.
docs/20-plan-new-https-object-stores.md-1398-1406 (1)
1398-1406: ⚠️ Potential issue | 🟡 Minor
Timeline section is stale — contradicts completed status.
This section still shows Phases 2–4 as "Ready to start" with estimated effort, yet all four phases are marked complete earlier in the document. Update to reflect actual completion dates and effort.
src/renderer/components/cloudExplorer/ConnectionForm.tsx-344-345 (1)
344-345: ⚠️ Potential issue | 🟡 Minor
Hardcoded `[MinIO Frontend]` in console error used by all providers.

This error log fires for test connection failures from any provider, not just MinIO. Use the actual provider name:

```diff
- console.error('[MinIO Frontend] Test connection failed:', error);
+ console.error(`[${formData.provider}] Test connection failed:`, error);
```

src/renderer/components/cloudExplorer/ConnectionForm.tsx-546-553 (1)
546-553: ⚠️ Potential issue | 🟡 Minor
`handleChange` is typed `string` but receives a `boolean` via `as any`.

`handleChange` has the signature `(field: keyof FormData, value: string)`, but the SSL checkbox passes `e.target.checked` (a boolean) cast through `as any` on lines 551 and 757. This silently bypasses type safety. The `useSSL` field is typed as `boolean` in `FormData`, so `handleChange` should accept `string | boolean`, or use a separate handler for boolean fields.

♻️ Proposed fix

```diff
-  const handleChange = (field: keyof FormData, value: string) => {
+  const handleChange = (field: keyof FormData, value: string | boolean) => {
     setFormData((prev) => ({ ...prev, [field]: value }));
   };
```

Then remove the `as any` casts:

```diff
- onChange={(e) => handleChange('useSSL', e.target.checked as any)}
+ onChange={(e) => handleChange('useSSL', e.target.checked)}
```

src/main/services/cloudExplorer.service.ts-888-909 (1)
888-909: ⚠️ Potential issue | 🟡 Minor
`createB2Client` is `static` but should be `private static`.

All other `create*Client` factory methods in this class (`createS3Client`, `createMinIOClient`, `createR2Client`, `createRustfsClient`) are `private static`. This one is public, which exposes an internal implementation detail.

🧹 Proposed fix

```diff
-  static createB2Client(config: BackblazeB2Config): S3Client {
+  private static createB2Client(config: BackblazeB2Config): S3Client {
```

src/main/services/cloudExplorer.service.ts-695-711 (1)
695-711: ⚠️ Potential issue | 🟡 Minor
Duplicate `accountId` check — the guard on line 695 makes lines 702–704 unreachable.

The first check (`!config.accountId`, line 695) already throws if `accountId` is falsy. The second check on line 702 (`!config.accountId || config.accountId.trim().length === 0`) can never trigger because execution would have thrown on line 696 first. Remove the redundant block and keep only the regex validation.

🧹 Proposed fix

```diff
   if (!config.accountId) {
     throw new Error('Cloudflare R2 Account ID is required');
   }

-  // Validate account ID format
-  // Cloudflare Account IDs are typically 32 characters, hexadecimal (lowercase)
-  // But can vary in length, so we'll be more flexible
-  if (!config.accountId || config.accountId.trim().length === 0) {
-    throw new Error('Cloudflare R2 Account ID is required');
-  }
-
   // Basic validation: should be alphanumeric and reasonable length (16-64 chars)
   if (!/^[a-zA-Z0-9]{16,64}$/.test(config.accountId)) {
```
🧹 Nitpick comments (11)
src/main/services/cloudPreview.service.ts (1)
30-37: Debug logging looks fine but consider guarding with a flag.

Logging the endpoint and request metadata is useful during development. However, since this PR is WIP, ensure these `console.log` calls are either removed or gated behind a debug/verbose flag before merging to `dev`, to avoid noisy logs in production builds.

docs/object-stores-ai-context.md (1)
1-500: Clarify the purpose and sourcing of this document.

This file appears to aggregate external documentation from Cloudflare, Backblaze, and RustFS into a single reference file. A few concerns:
- Licensing/attribution: If this content is copied from third-party docs, ensure it complies with the respective licenses and includes proper attribution.
- Staleness risk: Pasted documentation drifts from the source over time. Consider linking to the canonical sources instead, or adding a "last verified" note so maintainers know when to refresh it.
- Markdown consistency (from static analysis): Use a consistent horizontal rule style (`---`) instead of `---------------` (Line 227) and `-------` (Line 292).

src/renderer/hooks/useSecureStorage.ts (1)
128-208: Consider a factory to reduce the repetitive set/get/delete boilerplate.

Every cloud provider follows the exact same pattern — only the key prefix differs. A small helper would eliminate ~80 lines of near-identical code and make adding future providers trivial:

♻️ Example refactor

```typescript
function cloudCredentialOps(prefix: string) {
  return {
    set: (secret: string, connectionName: string) =>
      secureStorageService.set(`${prefix}-${connectionName}`, secret),
    get: (connectionName: string) =>
      secureStorageService.get(`${prefix}-${connectionName}`),
    delete: (connectionName: string) =>
      secureStorageService.delete(`${prefix}-${connectionName}`),
  };
}

// Usage inside the hook:
const cloudMinio = cloudCredentialOps('cloud-minio');
const cloudR2 = cloudCredentialOps('cloud-cloudflare-r2');
const cloudB2 = cloudCredentialOps('cloud-backblaze-b2');
const cloudRustfs = cloudCredentialOps('cloud-rustfs');
```

This would also simplify the return object. The existing providers (GCS, AWS, Azure) could be migrated in a follow-up.
src/renderer/components/cloudExplorer/ExplorerBucketContent.tsx (1)
165-169: Duplicate `getFileExtension` — already exists in `src/renderer/utils/fileUtils.ts`.

A `getFileExtension` utility already exists in `fileUtils`, which this file imports (line 55 imports `isPreviewSupported` from that module). The local re-implementation on lines 165–169 is functionally equivalent. Use the existing import instead.

♻️ Proposed fix

Add `getFileExtension` to the existing import on line 55:

```diff
-import { formatFileSize, isPreviewSupported } from '../../utils/fileUtils';
+import { formatFileSize, isPreviewSupported, getFileExtension } from '../../utils/fileUtils';
```

Then remove lines 165–169.
src/renderer/components/cloudExplorer/ExplorerDashboard.tsx (1)
244-312: Consider data-driven rendering for the provider list to reduce repetition.

Each provider block (lines 244–312) follows an identical pattern: check if the provider exists in `connectionTypes`, render icon + count + label. This could be driven by an array to eliminate ~70 lines of near-identical JSX and make adding future providers trivial.

♻️ Example refactor

```diff
+const PROVIDER_LABELS: Record<string, string> = {
+  aws: 'Amazon S3',
+  azure: 'Azure Blob Storage',
+  gcs: 'Google Cloud Storage',
+  minio: 'MinIO',
+  'cloudflare-r2': 'Cloudflare R2',
+  'backblaze-b2': 'Backblaze B2',
+  rustfs: 'rustfs',
+};
+
 // In the render:
-{connectionTypes.aws && (
-  <Box sx={{ display: 'flex', alignItems: 'center', gap: 1 }}>
-    {getProviderIcon('aws')}
-    <Typography variant="body2" color="text.secondary">
-      {connectionTypes.aws} Amazon S3 {connectionTypes.aws === 1 ? 'connection' : 'connections'}
-    </Typography>
-  </Box>
-)}
-{/* ... repeated for each provider ... */}
+{Object.entries(connectionTypes).map(([provider, count]) => (
+  <Box key={provider} sx={{ display: 'flex', alignItems: 'center', gap: 1 }}>
+    {getProviderIcon(provider as CloudProvider)}
+    <Typography variant="body2" color="text.secondary">
+      {count} {PROVIDER_LABELS[provider] || provider}{' '}
+      {count === 1 ? 'connection' : 'connections'}
+    </Typography>
+  </Box>
+))}
```
153-170: Initialization cascades config fields across unrelated providers.

The `accessKeyId` fallback chain (lines 158–164) reads from `S3Config`, `MinIOConfig`, `CloudflareR2Config`, `BackblazeB2Config.applicationKeyId`, and `RustfsConfig` — so editing a B2 connection will populate `formData.accessKeyId` with the B2 `applicationKeyId`. While this doesn't cause a visible bug (the B2 form uses `formData.applicationKeyId`), it leaves stale cross-provider data in the form state, which could surface unexpectedly if provider switching during editing is ever supported.

A cleaner approach is to populate fields only for the matching provider in the `useEffect` rather than cascading across all config types, as sketched below.
Grid > Card > CardActionArea > Box > icon + label, differing only in provider key and display name. Extracting aProviderCardcomponent and mapping over a config array would eliminate ~300 lines of duplication and make adding future providers trivial.src/types/frontend.ts (1)
107-134:RustfsConfigis structurally identical toMinIOConfig.Both interfaces share the exact same fields (
endpoint,accessKeyId,secretAccessKey,useSSL?,region?). Consider defining a shared base interface or a type alias to reduce duplication and ensure future field additions stay in sync.♻️ Suggested approach
// Shared base for S3-compatible self-hosted providers export interface S3CompatibleConfig { endpoint: string; accessKeyId: string; secretAccessKey: string; useSSL?: boolean; region?: string; } export type MinIOConfig = S3CompatibleConfig; export type RustfsConfig = S3CompatibleConfig;If the types are expected to diverge later, keeping them separate is fine — just add a brief comment noting the intentional duplication.
src/main/ipcHandlers/cloudExplorer.ipcHandlers.ts (1)
127-173: Validation is inconsistent — missing forazure,gcs, andrustfs.The
testConnectionhandler validates required fields foraws,minio,cloudflare-r2, andbackblaze-b2, but skipsazure,gcs, andrustfs. If the intent is to catch misconfiguration early at the IPC boundary, the same treatment should apply to all providers. Alternatively, move all validation into the service layer (which already validates credentials in thecreate*Clientmethods) and remove the IPC-level validation to avoid duplication.src/main/services/cloudExplorer.service.ts (2)
831-840: Avoid logging full error details to console in connection tests.
testR2Connection(lines 834–840) andtestB2Connection(lines 1010–1016) log structured error details includingname,message,code, andstatusCodeto console. While this is server-side, these logs could contain sensitive context from the AWS SDK error objects (e.g., endpoint URLs with tokens). Consider gating verbose logging behind a debug flag or reducing what's logged.
510-684: Heavy duplication across S3-compatible providers — consider a shared implementation.The MinIO and RustFS implementations are virtually identical (both
listBuckets,listObjects,getDownloadUrl, andtestConnectionare copy-pasted). Cloudflare R2 and Backblaze B2 follow the same S3 SDK pattern with only the client creation differing.You could extract the S3-compatible operations into shared private methods that accept a pre-built
S3Client, keeping only thecreate*Clientfactories andtest*Connectionerror handling as provider-specific. This would eliminate ~400+ lines of duplication and make future bug fixes apply uniformly.♻️ Sketch of the refactored approach
// Shared S3-compatible operations private static async listS3CompatibleBuckets( client: S3Client, defaultLocation: string, ): Promise<Bucket[]> { const data = await client.send(new ListBucketsCommand({})); return (data.Buckets || []).map((bucket) => ({ name: bucket.Name!, created: bucket.CreationDate!, location: defaultLocation, })); } private static async listS3CompatibleObjects( client: S3Client, bucketName: string, continuationToken?: string, prefix?: string, ): Promise<CloudListResult> { // single implementation used by AWS, MinIO, R2, B2, RustFS ... } private static async getS3CompatibleDownloadUrl( client: S3Client, bucketName: string, objectKey: string, ): Promise<string> { const command = new GetObjectCommand({ Bucket: bucketName, Key: objectKey }); return getSignedUrl(client, command, { expiresIn: 3600 }); }Then each provider method becomes a thin wrapper:
static async listMinIOBuckets(config: MinIOConfig): Promise<Bucket[]> { const client = CloudExplorerService.createMinIOClient(config); return CloudExplorerService.listS3CompatibleBuckets(client, config.region || 'us-east-1'); }Also applies to: 1073-1254
- Add escapeSqlString() helper function to escape single quotes in SQL strings - Replace all unescaped template literals with escapeSqlString() calls in buildCloudSecretQuery() - Apply escaping to AWS, MinIO, R2, B2, RustFS, GCS, and Azure credential values - Update cloudAuth.helper.ts to sanitize accessKeyId, secretAccessKey, region, endpoint, and other sensitive configuration values - Refactor cloudExplorer.ipcHandlers.ts to use CloudProvider type instead of inline union type - Update documentation with placeholder for R2 account ID in example commands - Prevents SQL injection vulnerabilities when building cloud storage authentication secrets
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/main/ipcHandlers/cloudExplorer.ipcHandlers.ts (1)
109-159: ⚠️ Potential issue | 🟠 Major
Config validation error messages leak secrets.

`JSON.stringify(minioConfig)` (and similarly for all other providers) will include `secretAccessKey`/`applicationKey` in the thrown error message. These errors can surface in console logs, error dialogs, or crash reports. Log only non-sensitive field names to indicate what's missing.

🛡️ Proposed fix (showing MinIO; apply same pattern to all providers)

```diff
   } else if (provider === 'minio') {
     const minioConfig = config as any;
     if (
       !minioConfig.endpoint ||
       !minioConfig.accessKeyId ||
       !minioConfig.secretAccessKey
     ) {
       throw new Error(
-        `Invalid MinIO config: missing required fields. Received: ${JSON.stringify(minioConfig)}`,
+        `Invalid MinIO config: missing required fields (endpoint, accessKeyId, or secretAccessKey).`,
       );
     }
```
🤖 Fix all issues with AI agents
In `@docs/20-plan-new-https-object-stores.md`:
- Around line 1398-1406: The "## Timeline & Milestones" block is stale: update
or remove the Phase 2–4 lines under that heading (the lines listing "**Phase 2
(Cloudflare R2)**", "**Phase 3 (Backblaze B2)**", and "**Phase 4 (rustfs)**" and
the "**Estimated Total Time**"/"**Actual Time (Phase 1)**" lines) so the section
no longer contradicts the "All Phases Complete ✅" status referenced earlier;
either mark all phases as complete with final dates and adjusted totals, or
delete the entire "## Timeline & Milestones" block to avoid inconsistent status.
- Around line 1370-1395: Consolidate the duplicated "Phase 4: rustfs Integration
(PENDING)" sections into a single Phase 4 block by removing the redundant copy
and merging the key-features into one authoritative section (use the
S3-compatible, forcePathStyle / custom endpoint details as applicable), and
update the status from "PENDING"/"Awaiting Phase 3 completion" to reflect that
Phase 4 is Complete (e.g., "Status: Complete"); ensure the heading "Phase 4:
rustfs Integration" remains and that Region/authentication notes and
forcePathStyle configuration are accurate and not duplicated.
- Line 635: Replace the truncated fragment "DuckDB connection pooling ensures
resour" with a complete sentence; locate the phrase "DuckDB connection pooling
ensures resour" and change it to something like "DuckDB connection pooling
ensures resources are reused efficiently and prevents exhaustion under
concurrent workloads." Ensure the revised sentence reads naturally with the
surrounding paragraph.
- Around line 581-598: Under the headings "Phase 3: Backblaze B2" and "Phase 4:
rustfs" update the checklist items so they reflect the declared COMPLETE status:
change each unchecked item beginning with "- [ ]" (the Backblaze B2 items: B2
provider added to ConnectionForm dropdown; Connection form shows B2-specific
fields; Test connection validates B2 credentials; List buckets works with B2
endpoint; Browse objects within B2 buckets; Preview data files from B2 using
DuckDB S3 secret; B2 icon displays correctly in UI; Custom endpoint
configuration works) and the rustfs items (rustfs provider added to
ConnectionForm dropdown; Connection form shows rustfs-specific fields; Test
connection validates rustfs credentials; List buckets works with rustfs
endpoint; Browse objects within rustfs buckets; Preview data files from rustfs
using DuckDB S3 secret; rustfs icon displays correctly in UI) from "- [ ]" to "-
[x]".
- Around line 866-897: Remove the duplicated "Phase 2: Cloudflare R2 Integration
(COMPLETE)" block and merge the checklist so there is a single authoritative
section; keep the header "Phase 2: Cloudflare R2 Integration (COMPLETE)" and a
unified checklist that contains each task (e.g., "Add CloudflareR2Config
interface", "Add R2 methods to cloudExplorer.service.ts", "Add secure storage
methods setCloudR2Secret/getCloudR2Secret", etc.) only once, and reconcile the
"Actual Effort" field to the correct value (replace the conflicting "2 hours" or
"3 hours" with the verified effort) so the section shows one consistent
completion date, effort, status, and checklist.
- Line 1148: Replace the hard-coded Cloudflare account ID in the example
endpoint URL "https://1317e314d442cf798184588f7ba4866a.r2.cloudflarestorage.com"
with a placeholder to match the earlier AWS CLI example (use
"https://<R2_ACCOUNT_ID>.r2.cloudflarestorage.com"); update the single-line
endpoint example so it no longer exposes a real account ID and is consistent
with the placeholder used at line 1075.
In `@src/main/helpers/cloudAuth.helper.ts`:
- Around line 256-266: The connection string is being escaped twice:
escapeSqlString is applied to azureConfig.accountName and accountKey when
building connectionString and then again to the entire connectionString when
interpolating into the CREATE SECRET SQL; remove the double-escape by building
connectionString using the raw accountName/accountKey (use
azureConfig.accountName and azureConfig.accountKey directly) and keep the single
escape when embedding it via escapeSqlString(connectionString) in the CREATE OR
REPLACE SECRET azure_secret block (references: azureConfig, connectionString,
escapeSqlString, dropSecretsQuery, azure_secret).
In `@src/main/services/cloudExplorer.service.ts`:
- Around line 888-907: The method createB2Client is declared public (static)
while other provider factories (createS3Client, createMinIOClient,
createR2Client, createRustfsClient) are private static, leaking an
implementation detail; change the declaration of createB2Client to private
static so its visibility matches the other provider client factory methods and
update any internal usages to access it directly (no external callers should
rely on it).
🧹 Nitpick comments (3)
src/main/helpers/cloudAuth.helper.ts (1)
19-31: `escapeSqlString` — good addition; minor type signature mismatch.

The implementation defensively handles `null` (line 24) and non-string types (lines 27–28), but the TypeScript signature only accepts `string | undefined`. This is fine as a runtime safety net, but consider widening the signature to `unknown` (or at least `string | undefined | null`) so the compiler doesn't silently accept incorrect calls that rely on the runtime fallback.

src/main/services/cloudExplorer.service.ts (2)
696-709: Redundant `accountId` validation — checked twice.

Lines 696–698 throw if `accountId` is falsy, then lines 702–704 throw again if `accountId` is falsy or empty-after-trim. The first check is a subset of the second; remove the first to reduce noise.

♻️ Proposed fix

```diff
-  if (!config.accountId) {
-    throw new Error('Cloudflare R2 Account ID is required');
-  }
-
-  // Validate account ID format
-  // Cloudflare Account IDs are exactly 32 characters, alphanumeric
-  if (!config.accountId || config.accountId.trim().length === 0) {
-    throw new Error('Cloudflare R2 Account ID is required');
-  }
-
-  // Validation: must be exactly 32 alphanumeric characters
-  if (!/^[a-zA-Z0-9]{32}$/.test(config.accountId)) {
+  if (
+    !config.accountId ||
+    !/^[a-zA-Z0-9]{32}$/.test(config.accountId.trim())
+  ) {
     throw new Error('Account ID must be exactly 32 alphanumeric characters');
   }
```
511-685: Heavy code duplication between S3-compatible providers.

The MinIO and Rustfs implementations (`createClient`, `listBuckets`, `listObjects`, `getDownloadUrl`) are near-identical — they only differ in error-message prefixes and one extra `NetworkError` check in Rustfs. The same pattern also largely repeats for B2 and the existing AWS S3 methods.

Consider extracting a shared `S3CompatibleProvider` helper that takes an `S3Client` plus a provider label and implements the common list/get/presign logic once, with provider-specific error mappers passed as callbacks. This would eliminate ~400 lines of duplicated code.

Not a blocker for this PR, but worth tracking as a follow-up.
Also applies to: 1071-1252
…ction string - Remove escapeSqlString() calls from Azure account name and key in connection string builder - Use raw configuration values directly in DefaultEndpointsProtocol format - Change createB2Client() method visibility from public to private in CloudExplorerService - Azure connection strings should not have SQL-escaped values as they are passed to Azure SDK, not SQL queries
- Remove plan document for HTTPS object stores integration (MinIO, Cloudflare R2, Backblaze B2, rustfs) - Remove plan document for cloud object explorer UI enhancements - Remove DuckDB HTTPS object stores AI context documentation - Remove object stores AI context documentation - Clean up planning artifacts after feature implementation completion
…ooltip with ellipsis truncation for bucket names.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/renderer/components/cloudExplorer/ExplorerConnections.tsx (1)
65-76: ⚠️ Potential issue | 🟡 Minor
`getProviderName`, `getProviderColor`, and `renderConnectionDetails` don't handle the new providers.

This PR adds `minio`, `cloudflare-r2`, `backblaze-b2`, and `rustfs`, but these three switch statements still only cover `gcs`/`aws`/`azure`. The `default` in `getProviderName` produces ugly names like "CLOUDFLARE-R2" and "BACKBLAZE-B2". All new providers get the same grey `#666` color chip and render no connection details.

Suggested additions for getProviderName (similar pattern for the other two functions)

```diff
       case 'azure':
         return 'Azure Blob Storage';
+      case 'minio':
+        return 'MinIO';
+      case 'cloudflare-r2':
+        return 'Cloudflare R2';
+      case 'backblaze-b2':
+        return 'Backblaze B2';
+      case 'rustfs':
+        return 'RustFS';
       default:
         return (provider as string).toUpperCase();
```

src/renderer/components/cloudExplorer/ExplorerBuckets.tsx (1)
94-162: ⚠️ Potential issue | 🟡 Minor
`useEffect` dependency array is missing the secret-getter functions.

The effect uses seven functions from `useSecureStorage()` (lines 82–90: `getCloudAwsSecret`, `getCloudAzureKey`, `getCloudGcsCredential`, `getCloudMinioSecret`, `getCloudR2Secret`, `getCloudB2Secret`, `getCloudRustfsSecret`) but the dependency array on line 162 only lists `[connection]`. Since these functions are recreated on every render of the component (they're not wrapped in `useCallback` in the hook), React's exhaustive-deps lint rule will flag this as missing dependencies.

Recommended fixes:

- Wrap the getter functions in `useCallback` within `useSecureStorage()` (see the sketch after this list), or
- Add a `/* eslint-disable-next-line react-hooks/exhaustive-deps */` comment with an explanation (e.g., "Functions are recreated each render but effect only depends on connection changes").
🤖 Fix all issues with AI agents
In `@src/renderer/components/cloudExplorer/ExplorerBuckets.tsx`:
- Around line 187-206: The Size sort is a no-op because case 'size' sets
comparison = 0; fix it by implementing a numeric comparison using the bucket
size field (e.g., use (a.size ?? 0) - (b.size ?? 0)) inside the switch case for
'size' so sizes sort ascending/descending according to sortOrder, and ensure the
component still uses the existing sortBy/sortOrder logic (refer to the sortBy
variable, the switch-case block, and sortOrder handling) so the TableSortLabel
for the Size column behaves correctly; alternatively, if you prefer
non-sortable, remove the TableSortLabel from the Size column instead of
implementing the numeric comparison.
🧹 Nitpick comments (2)
src/renderer/components/cloudExplorer/ExplorerBuckets.tsx (2)
314-318: Hardcoded "R/W" access value.

Every bucket displays "R/W" regardless of actual permissions. If access info isn't available from the API, consider either removing this column or displaying a dash (`-`) like the other unknown fields, to avoid implying a specific access level.
516-554: Loading skeleton always renders as a table, even in card view mode.

When `view` is `'card'`, users still see a table skeleton during loading. Consider rendering a grid of `Skeleton` cards when in card view for a consistent experience, as sketched below.
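A possible card-style skeleton using MUI components already in the project; the component name, grid sizing, and placeholder count are assumptions, not the PR's code.

```tsx
import * as React from 'react';
import { Box, Card, CardContent, Skeleton } from '@mui/material';

// Renders a grid of placeholder cards while buckets load in card view.
export function BucketsCardSkeleton({ count = 6 }: { count?: number }) {
  return (
    <Box
      sx={{
        display: 'grid',
        gridTemplateColumns: 'repeat(auto-fill, minmax(220px, 1fr))',
        gap: 2,
      }}
    >
      {Array.from({ length: count }).map((_, i) => (
        <Card key={i}>
          <CardContent>
            <Skeleton variant="circular" width={40} height={40} />
            <Skeleton variant="text" sx={{ mt: 1 }} />
            <Skeleton variant="text" width="60%" />
          </CardContent>
        </Card>
      ))}
    </Box>
  );
}

// In the loading branch: view === 'card' ? <BucketsCardSkeleton /> : existing table skeleton.
```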
```typescript
switch (sortBy) {
  case 'name':
    comparison = a.name.localeCompare(b.name);
    break;
  case 'size':
    // Size not available, keep original order
    comparison = 0;
    break;
  case 'created': {
    const dateA = a.created ? new Date(a.created).getTime() : 0;
    const dateB = b.created ? new Date(b.created).getTime() : 0;
    comparison = dateA - dateB;
    break;
  }
  default:
    break;
}

return sortOrder === 'asc' ? comparison : -comparison;
});
```
Sorting by size is a no-op — the column header is misleading.
`case 'size'` sets `comparison = 0`, so clicking the "Size" sort header toggles the arrow but never reorders. Either implement the sort or remove the `TableSortLabel` from the Size column to avoid confusing users.
Option A: implement size sort

```diff
       case 'size':
-        // Size not available, keep original order
-        comparison = 0;
+        comparison = (a.size ?? 0) - (b.size ?? 0);
         break;
```

Option B: make Size column non-sortable

```diff
-              <TableCell>
-                <TableSortLabel
-                  active={sortBy === 'size'}
-                  direction={sortBy === 'size' ? sortOrder : 'asc'}
-                  onClick={() => handleSort('size')}
-                >
-                  Size
-                </TableSortLabel>
-              </TableCell>
+              <TableCell>Size</TableCell>
```
+ <TableCell>Size</TableCell>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
switch (sortBy) {
  case 'name':
    comparison = a.name.localeCompare(b.name);
    break;
  case 'size':
    comparison = (a.size ?? 0) - (b.size ?? 0);
    break;
  case 'created': {
    const dateA = a.created ? new Date(a.created).getTime() : 0;
    const dateB = b.created ? new Date(b.created).getTime() : 0;
    comparison = dateA - dateB;
    break;
  }
  default:
    break;
}

return sortOrder === 'asc' ? comparison : -comparison;
});
```
🤖 Prompt for AI Agents
In `@src/renderer/components/cloudExplorer/ExplorerBuckets.tsx` around lines 187 -
206, The Size sort is a no-op because case 'size' sets comparison = 0; fix it by
implementing a numeric comparison using the bucket size field (e.g., use (a.size
?? 0) - (b.size ?? 0)) inside the switch case for 'size' so sizes sort
ascending/descending according to sortOrder, and ensure the component still uses
the existing sortBy/sortOrder logic (refer to the sortBy variable, the
switch-case block, and sortOrder handling) so the TableSortLabel for the Size
column behaves correctly; alternatively, if you prefer non-sortable, remove the
TableSortLabel from the Size column instead of implementing the numeric
comparison.
Summary by CodeRabbit
New Features
UI/UX Improvements