diff --git a/CLAUDE.md b/CLAUDE.md
index 8ed8245..9bec2fb 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -23,9 +23,49 @@ All HLS, M3U8, and DASH downloads are processed by **FFmpeg.wasm** running insid
 
 > **Planned**: Migrate FFmpeg.wasm to [mediabunny](https://github.com/nicktindall/mediabunny) for native-speed muxing without the 2 GB constraint.
 
-## Cloud Upload (Planned)
+## Cloud Upload
 
-The code infrastructure for Google Drive uploads exists in `src/core/cloud/` (`GoogleAuth`, `GoogleDriveClient`, `UploadManager`) and the `uploadToDrive` flag is plumbed through `DownloadManager`, but **no actual upload is ever triggered** — `this.uploadToDrive` is stored but never used in `download()`. Do not document this as a working feature. Future work will wire this up and add support for additional providers (S3, Dropbox, etc.).
+`src/core/cloud/` contains the full provider abstraction:
+
+- `base-cloud-provider.ts` — abstract `BaseCloudProvider` with `id: CloudProvider` and `upload(blob, filename, onProgress?, signal?): Promise<string>`
+- `google-auth.ts` — `GoogleAuth` static class; OAuth via `chrome.identity.launchWebAuthFlow()` with user-provided client ID (stored in `chrome.storage.local` under `google_client_id`). No `oauth2` manifest key needed.
+- `google-drive.ts` — `GoogleDriveClient extends BaseCloudProvider`; simple multipart for files ≤ 5 MB, resumable chunked upload for larger files
+- `s3-client.ts` — `S3Client extends BaseCloudProvider`; SigV4-signed PUT for files < 10 MB, multipart for ≥ 10 MB. Persists in-flight multipart `uploadId` to `chrome.storage.local` (`s3_pending_uploads`) for crash-resilient cleanup.
+- `upload-manager.ts` — `Map<CloudProvider, BaseCloudProvider>` registry; routes `uploadBlob()` through `client.upload()`; `isConfigured()` checks `providers.size > 0`
+
+### Upload Flow
+
+Upload is **manual only**. Completed downloads show an "Upload to cloud" action in both the History page (Options → History → item menu) and the popup Downloads tab.
Clicking it opens a file picker (`showOpenFilePicker`); the user selects the local video file, which is stored in IDB (since `chrome.runtime.sendMessage` uses JSON serialization that destroys `ArrayBuffer`), then an `UPLOAD_REQUEST` message is sent to the service worker. If both providers are configured, a provider picker dialog appears. There is no automatic upload on download completion. + +Upload progress is tracked via `DownloadStage.UPLOADING` and displayed with a water-fill cloud icon in history. Uploads can be cancelled via an `AbortController`; cancellation prompts a confirmation dialog. On cancel, `AbortError` (DOMException) is expected and handled silently — not logged as an error. + +### Crash Resilience + +- **S3 multipart**: In-flight `(key, uploadId)` pairs are persisted to `chrome.storage.local`. On service worker startup, `S3Client.cleanupOrphanedUploads()` aborts any orphaned uploads and clears the list. +- **Stuck uploads**: Downloads in `UPLOADING` stage after a crash are restored to `COMPLETED` by `cleanupStaleUploads()` in `init()`. + +### Google Drive Setup + +Users must create their own OAuth credentials (free). The options page shows step-by-step instructions: +1. Enable Google Drive API in Google Cloud Console +2. Create OAuth 2.0 Client ID (type: **Web application**) +3. Add the extension's redirect URI (`chrome.identity.getRedirectURL()`) as an authorized redirect URI +4. Paste the client ID into the options page +5. Clicking "Sign in with Google" auto-saves settings with `enabled: true` + +The `chromiumapp.org` redirect works for unpacked extensions — Chrome intercepts it internally without needing the Chrome Web Store. + +### Adding a New Provider + +Create a class extending `BaseCloudProvider`, add its key to the `CloudProvider` union in `shared/messages.ts`, instantiate and register it in the `UploadManager` constructor — no other code needs to change. 
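The provider contract makes this concrete. Below is a minimal sketch of a hypothetical new provider — `DropboxClient`, its `'dropbox'` union key, and the upload endpoint are all invented for illustration; only the `upload()` shape follows `base-cloud-provider.ts`:

```typescript
// Minimal restatement of the real abstractions (base-cloud-provider.ts /
// shared/messages.ts), so the sketch is self-contained. 'dropbox' is a
// hypothetical key that would be added to the CloudProvider union.
type CloudProvider = 'googleDrive' | 's3' | 'dropbox';
type ProgressCallback = (uploaded: number, total: number) => void;

abstract class BaseCloudProvider {
  abstract readonly id: CloudProvider;
  abstract upload(
    blob: Blob,
    filename: string,
    onProgress?: ProgressCallback,
    signal?: AbortSignal,
  ): Promise<string>;
}

// Hypothetical provider — endpoint and response shape are made up.
class DropboxClient extends BaseCloudProvider {
  readonly id = 'dropbox' as const;

  async upload(
    blob: Blob,
    filename: string,
    onProgress?: ProgressCallback,
    signal?: AbortSignal,
  ): Promise<string> {
    const response = await fetch(
      `https://upload.example.com/${encodeURIComponent(filename)}`,
      { method: 'PUT', body: blob, signal }, // signal propagates cancellation
    );
    if (!response.ok) throw new Error(`Upload failed: ${response.status}`);
    onProgress?.(blob.size, blob.size); // single-shot PUT: report completion only
    return (await response.json()).url; // shareable URL, per the base-class contract
  }
}
```

After registering an instance in the `UploadManager` constructor, `uploadBlob()` routes to it with no further changes.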
+ +### S3 Bucket CORS Requirement + +The bucket must have a CORS policy whitelisting the extension origin. The options page S3 section generates the correct JSON dynamically (using `chrome.runtime.id`) and provides a "Copy CORS Config" button. Paste it into **S3 → bucket → Permissions → Cross-origin resource sharing (CORS) → Edit**. Without it the browser will block upload requests from the `chrome-extension://` origin. + +### S3 Secret Key Encryption + +The S3 secret access key can be encrypted at rest with AES-GCM via `SecureStorage`. The user sets a passphrase in the options page; the key is stored as an `EncryptedBlob` in `chrome.storage.local`. On upload, `resolveS3Secret()` in the service worker decrypts it using the passphrase from session storage. ## Architecture @@ -203,11 +243,14 @@ src/ │ │ └── chunks.ts # storeChunk(), deleteChunks(), getChunkCount() │ ├── storage/ │ │ ├── chrome-storage.ts +│ │ ├── secure-storage.ts # AES-GCM encrypt/decrypt for S3 secret key │ │ └── settings.ts # AppSettings interface + loadSettings() — always use this -│ ├── cloud/ # ⚠️ Planned — not wired up yet -│ │ ├── google-auth.ts -│ │ ├── google-drive.ts -│ │ └── upload-manager.ts +│ ├── cloud/ +│ │ ├── base-cloud-provider.ts # Abstract base + ProgressCallback type +│ │ ├── google-auth.ts # OAuth via launchWebAuthFlow (user-provided client ID) +│ │ ├── google-drive.ts # GoogleDriveClient extends BaseCloudProvider +│ │ ├── s3-client.ts # S3Client extends BaseCloudProvider + orphaned upload cleanup +│ │ └── upload-manager.ts # Provider registry (Map) │ ├── metadata/ │ │ └── metadata-extractor.ts │ └── utils/ diff --git a/GEMINI.md b/GEMINI.md new file mode 100644 index 0000000..40dacef --- /dev/null +++ b/GEMINI.md @@ -0,0 +1,257 @@ +# GEMINI.md + +This file provides guidance to Gemini CLI when working with code in this repository. 
+
+## Commands
+
+```bash
+# Development build with watch mode (rebuilds on file changes)
+npm run dev
+
+# Production build (minified, no sourcemaps)
+npm run build
+
+# TypeScript type checking only (no emit)
+npm run type-check
+```
+
+There are no tests in this project. After building, load the `dist/` directory as an unpacked extension in Chrome (`chrome://extensions/` → Developer mode → Load unpacked).
+
+## FFmpeg.wasm Size Limit
+
+All HLS, M3U8, and DASH downloads are processed by **FFmpeg.wasm** running inside the browser. Output files are limited to approximately **2 GB** — files beyond this will exhaust browser memory during the merge stage.
+
+> **Planned**: Migrate FFmpeg.wasm to [mediabunny](https://github.com/nicktindall/mediabunny) for native-speed muxing without the 2 GB constraint.
+
+## Cloud Upload
+
+`src/core/cloud/` contains the full provider abstraction:
+
+- `base-cloud-provider.ts` — abstract `BaseCloudProvider` with `id: CloudProvider` and `upload(blob, filename, onProgress?, signal?): Promise<string>`
+- `google-drive.ts` — `GoogleDriveClient extends BaseCloudProvider` (resumable chunked upload for files > 5 MB)
+- `s3-client.ts` — `S3Client extends BaseCloudProvider` (SigV4-signed PUT for files < 10 MB, multipart for ≥ 10 MB)
+- `upload-manager.ts` — `Map<CloudProvider, BaseCloudProvider>` registry; routes `uploadBlob()` through `client.upload()`; `isConfigured()` checks `providers.size > 0`
+
+**Upload is manual only** — completed downloads show an "Upload to cloud" action in the History page (Options → History → item menu). Clicking it opens a file picker (`showOpenFilePicker`); the user selects the local video file, which is sent to the service worker via an `UPLOAD_REQUEST` message for cloud upload. If both providers are configured, a dialog lets the user choose. There is no automatic upload on download completion. The popup Downloads tab only shows in-progress downloads; finished downloads live in History.
+ +**To add a new provider**: create a class extending `BaseCloudProvider`, add its key to the `CloudProvider` union in `shared/messages.ts`, instantiate and register it in the `UploadManager` constructor — no other code needs to change. + +**S3 bucket CORS requirement**: the bucket must have a CORS policy whitelisting the extension origin. The options page S3 section generates the correct JSON dynamically (using `chrome.runtime.id`) and provides a "Copy CORS Config" button. Paste it into **S3 → bucket → Permissions → Cross-origin resource sharing (CORS) → Edit**. Without it the browser will block upload requests from the `chrome-extension://` origin. + +## Architecture + +Media Bridge is a Manifest V3 Chrome extension. It has five distinct execution contexts that communicate via `chrome.runtime.sendMessage`: + +1. **Service Worker** (`src/service-worker.ts` → `dist/background.js`): The central orchestrator. Handles all download lifecycle management, routes messages from popup/content scripts, maintains download state, and keeps itself alive during long operations using `chrome.runtime.getPlatformInfo()` heartbeats. Intercepts `.m3u8` and `.mpd` network requests via `chrome.webRequest.onCompleted`. + +2. **Content Script** (`src/content.ts` → `dist/content.js`): Built separately as IIFE (content scripts cannot use ES modules). Runs on all pages, uses `DetectionManager` to find videos via DOM observation and network request interception, and injects download buttons. Proxies fetch requests through the service worker to bypass CORS. + +3. **Offscreen Document** (`src/offscreen/` → `dist/offscreen/`): A hidden page created on demand for FFmpeg.wasm processing. Reads raw segment chunks from IndexedDB, concatenates them, runs FFmpeg to mux into MP4, and returns a blob URL. Communicates with the service worker via messages since it can't use the Chrome downloads API directly. 
FFmpeg.wasm is single-threaded — all processing calls are serialized through a promise-based `enqueue()` queue to prevent concurrent `ffmpeg.exec()` from corrupting shared WASM state. Intermediate filenames are prefixed with `downloadId` (e.g., `${downloadId}_video.ts`) to avoid collisions in FFmpeg's virtual filesystem. + +4. **Popup** (`src/popup/` → `dist/popup/`): Extension action UI — Videos tab (detected videos), Downloads tab (progress), Manifest tab (manual URL input with quality selector). + +5. **Options Page** (`src/options/` → `dist/options/`): Full settings UI with sidebar navigation. Sections: Download (FFmpeg timeout, max concurrent), History (completed/failed/cancelled download log with infinite scroll), Google Drive, S3, Recording (HLS poll interval tuning), Notifications, and Advanced (retries, backoff, cache sizes, fragment failure rate, IDB sync interval). All settings changes notify via bottom toast. History button in the popup header opens the options page directly on the `#history` anchor. + +### Download Flow + +For HLS/M3U8 downloads: +1. Service worker creates a `DownloadManager`, which delegates to `HlsDownloadHandler` or `M3u8DownloadHandler` +2. Handler parses the playlist, downloads segments concurrently (up to `maxConcurrent`, default 3), stores raw chunks in IndexedDB (`core/database/chunks.ts`) +3. Handler sends `OFFSCREEN_PROCESS_HLS` or `OFFSCREEN_PROCESS_M3U8` message to offscreen document +4. Offscreen document enqueues the job — if another FFmpeg job is running, it waits. Once dequeued, it concatenates chunks from IndexedDB, runs FFmpeg, and returns a blob URL +5. Service worker triggers Chrome download from the blob URL and saves the MP4 + +For DASH downloads: same flow but via `DashDownloadHandler` and `OFFSCREEN_PROCESS_DASH`. No `-bsf:a aac_adtstoasc` bitstream filter (DASH segments are already ISOBMF). Intermediate files use `.mp4` extension instead of `.ts`. 
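The promise-based serialization described above can be sketched as a simple promise-chain queue — an illustrative shape, not the actual offscreen `enqueue()` implementation:

```typescript
// Each job starts only after the previous one settles, so two ffmpeg.exec()
// calls can never touch the shared WASM state concurrently. Sketch only.
let tail: Promise<unknown> = Promise.resolve();

function enqueue<T>(job: () => Promise<T>): Promise<T> {
  // Chain onto the tail whether the previous job succeeded or failed.
  const run = tail.then(job, job);
  // Keep the chain alive even when a job rejects.
  tail = run.catch(() => undefined);
  return run;
}
```

Wrapping every FFmpeg call in `enqueue()` means a second download's merge simply waits its turn instead of corrupting the first one's virtual filesystem.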
+ +For direct downloads: the service worker uses `chrome.downloads.download()` directly — no FFmpeg. + +For live recording (HLS or DASH): the recording handler polls the media playlist/MPD at the stream's native interval (derived from `#EXT-X-TARGETDURATION` for HLS, `minimumUpdatePeriod` for DASH), collecting new segments as they appear. Aborting triggers the merge phase rather than a discard. + +### State Persistence + +Download state is persisted in **IndexedDB** (not `chrome.storage`), in the `media-bridge` database (version 3) with two object stores: +- `downloads`: Full `DownloadState` objects keyed by `id` +- `chunks`: Raw `Uint8Array` segments keyed by `[downloadId, index]` + +Configuration lives in `chrome.storage.local` under the `storage_config` key (`StorageConfig` type). Always access config through `loadSettings()` (`core/storage/settings.ts`) which returns a fully-typed `AppSettings` object with all defaults applied — never read `StorageConfig` directly. `AppSettings` covers: `ffmpegTimeout`, `maxConcurrent`, `historyEnabled`, `googleDrive`, `s3`, `recording`, `notifications`, and `advanced`. + +IndexedDB is used as the shared state store because the five execution contexts don't share memory. The service worker writes state via `storeDownload()` (`core/database/downloads.ts`), which is a single IDB `put` upsert keyed by `id`. The popup reads the full list via `getAllDownloads()` on open. The offscreen document reads raw chunks from the `chunks` store during FFmpeg processing. `chrome.storage` is only used for config because it has a 10 MB quota and can't store `ArrayBuffer`. + +Progress updates use two complementary channels: +- **IndexedDB** — durable source of truth; survives popup close/reopen and service worker restarts. Popup reads this on mount. +- **`chrome.runtime.sendMessage` (`DOWNLOAD_PROGRESS`)** — low-latency live updates broadcast by the service worker while the popup is open. Fire-and-forget; missed if popup is closed. 
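The two channels amount to a single publish step, sketched below — the names and payload shape are illustrative, with `store` standing in for the IDB upsert (`storeDownload`) and `notify` for the `chrome.runtime.sendMessage` broadcast:

```typescript
interface ProgressUpdate {
  id: string;
  downloaded: number;
  total: number;
  percentage: number;
}

// Durable write first, then a fire-and-forget live broadcast.
async function publishProgress(
  update: ProgressUpdate,
  store: (u: ProgressUpdate) => Promise<void>,
  notify: (u: ProgressUpdate) => Promise<void>,
): Promise<void> {
  await store(update); // source of truth: survives popup close / SW restart
  // Live channel: if no popup is listening the promise rejects — swallow it.
  notify(update).catch(() => undefined);
}
```

The deliberate asymmetry — awaited durable write, unawaited broadcast — is why a closed popup can miss live updates yet still render correct state from IndexedDB on reopen.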
+ +### Progress Update Design (BasePlaylistHandler) + +`updateProgress()` (`core/downloader/base-playlist-handler.ts`) is the hot-path progress method called after every segment download. It uses two optimizations to avoid overwhelming the service worker event loop: + +1. **`cachedState`** — a class field holding the `DownloadState` object read from IDB on the first call. Every subsequent call mutates this same in-memory object directly (updating `downloaded`, `total`, `percentage`, `speed`, etc.) — zero DB reads. The cache is invalidated (`cachedState = null`) only on `resetDownloadState()` (new download) and `updateStage()` (stage transition), which forces a fresh IDB read to pick up any external changes. + +2. **`DB_SYNC_INTERVAL_MS = 500ms` throttle** — `storeDownload()` is only called if at least 500ms have elapsed since the last write. The popup still receives every update via `notifyProgress()` (which fires unconditionally), but IDB writes are capped at ~2/second regardless of segment download frequency. + +`updateStage()` bypasses both optimizations — it always does a full IDB read + write because stage transitions are rare and need to reflect the true persisted state. + +`HlsRecordingHandler.updateRecordingProgress()` and `DashRecordingHandler.updateRecordingProgress()` also always do a full IDB read + write, but are naturally rate-limited to once per poll cycle (every 1–10 seconds). + +**Do not add `getDownload()` calls inside `updateProgress()` or the `onProgress` callback** — that was the root cause of the UI freezing bug fixed in commit `9f2a21e`. With 3 concurrent downloads each firing per segment, even one extra IDB read per callback produces dozens of blocking reads per second that queue up behind user interaction messages in the service worker event loop. + +### Message Protocol + +All inter-component communication uses the `MessageType` enum in `src/shared/messages.ts`. 
When adding new message types, add them to this enum and handle them in the service worker's `onMessage` listener switch statement. `CHECK_URL` is used by the options page manifest-check feature to probe a URL's content-type via the service worker (bypassing CORS). + +### Build System + +The Vite config (`vite.config.ts`) has two important quirks: +- **Content script** is built in a separate Vite sub-build (IIFE format, `inlineDynamicImports: true`) triggered by the `build-content-script-as-iife` plugin. This avoids ES module restrictions for content scripts. +- **HTML files** are post-processed by the `move-html-files` plugin to fix script src paths from absolute to relative after Vite moves them. + +FFmpeg WASM files are served from `public/ffmpeg/` and copied to `dist/ffmpeg/` at build time. They are explicitly excluded from Vite's dependency optimization. + +### Path Alias + +`@` resolves to `src/`. Use `@/core/types` instead of relative paths when importing from deep nesting. + +### Format Detection + +`VideoFormat` is a string enum (`VideoFormat.DIRECT | HLS | M3U8 | DASH | UNKNOWN`). The distinctions matter: +- `HLS` — master playlist with `#EXT-X-STREAM-INF` quality variants → `HlsDownloadHandler` +- `M3U8` — direct media playlist with segments → `M3u8DownloadHandler` +- `DASH` — MPEG-DASH `.mpd` manifest → `DashDownloadHandler` + +Use enum values everywhere; the underlying strings are lowercase for IndexedDB backward compatibility. + +### Live Stream Recording + +Both HLS and DASH support live recording via `HlsRecordingHandler` and `DashRecordingHandler`, which extend the shared `BaseRecordingHandler`. The recording handler polls the media playlist/MPD at a fixed interval, downloads new segments as they appear, and merges them into an MP4 when the user stops recording or the stream ends naturally (`#EXT-X-ENDLIST` for HLS; `type="dynamic"` → `type="static"` transition for DASH). 
Controlled via `AbortSignal` — aborting triggers the merge phase, not a discard. The popup UI shows a REC button (only for live streams) and a dedicated `RECORDING` stage with segment count. + +### Header Injection (declarativeNetRequest) + +`Origin` and `Referer` are **forbidden headers** — browsers silently strip them from `fetch()` calls, even in service worker context. CDNs that require these headers will 404 without them. + +The fix uses `chrome.declarativeNetRequest` dynamic rules (`src/core/downloader/header-rules.ts`) to inject these headers at the network layer. Rules are scoped to the specific CDN path prefix and `initiatorDomains: [chrome.runtime.id]` so they only affect extension requests. Each download handler calls `addHeaderRules()` before downloading and `removeHeaderRules()` in its `finally` block. **Do not** attempt to set `Origin`/`Referer` via `fetch()` headers — it won't work. + +### Stop & Save (Partial Downloads) + +HLS, M3U8, and DASH handlers support saving partial downloads when cancelled. If `shouldSaveOnCancel()` returns true, the handler transitions to the `MERGING` stage with whatever chunks were collected, runs FFmpeg, and saves a partial MP4. The abort signal is cleared before FFmpeg processing to prevent immediate rejection. + +### Constants Ownership + +- `src/shared/constants.ts` — only constants used across **multiple** modules (runtime defaults, pipeline values, storage keys) +- `src/options/constants.ts` — constants used exclusively within the options UI (toast duration, UI bounds for all settings inputs in seconds, validation clamp values) + +**Time representation**: All runtime/storage values use **milliseconds** (`StorageConfig`, `AppSettings`, all handlers). The options UI uses **seconds** exclusively. Conversion happens only in `options.ts`: divide by 1000 on load, multiply by 1000 on save. 
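That boundary can be sketched in two one-liners — the helper names are illustrative, since the real conversion is inlined in `options.ts`:

```typescript
// Storage/runtime values are milliseconds; the options UI shows seconds.
// Convert only at the options-page boundary, never inside handlers.
const msToUiSeconds = (ms: number): number => ms / 1000;           // on load
const uiSecondsToMs = (seconds: number): number => seconds * 1000; // on save
```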
+ +### Options Page Field Validation + +All numeric inputs are validated **before** saving via three helpers in `options.ts`: + +- `validateField(input, min, max, isInteger?)` — parses the value, returns the number on success or `null` on failure. Calls `markInvalid` automatically. +- `markInvalid(input, message)` — adds `.invalid` class (red border) and inserts a `.form-error` div after the input. Registers a one-time `input` listener to auto-clear when the user edits. +- `clearInvalid(input)` — removes `.invalid` and the `.form-error` div. + +Each save handler validates all fields upfront and returns early if any are invalid — the button is never disabled and no write is attempted. Cross-field constraints (e.g. `pollMin < pollMax`) call `markInvalid` on the relevant field directly rather than relying on the toast. The toast is reserved for storage/network errors. + +### History + +Completed, failed, and cancelled downloads are persisted in IndexedDB when `historyEnabled` (default `true`) is set. The options page History section renders all finished downloads with infinite scroll (`IntersectionObserver`). From history, users can re-download (reuses stored metadata for filename), copy the original URL, or delete entries. `bulkDeleteDownloads()` (`core/database/downloads.ts`) handles batch removal. The popup "History" button navigates to `options.html#history`. + +### Post-Download Actions + +After a download completes, `handlePostDownloadActions()` in the service worker reads `AppSettings.notifications` and optionally fires an OS notification (`notifyOnCompletion`) or opens the file in Finder/Explorer (`autoOpenFile`). 
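That dispatch can be sketched as a pure decision step — the action shapes below are illustrative; in the real service worker they map onto `chrome.notifications` and `chrome.downloads` calls:

```typescript
interface NotificationSettings {
  notifyOnCompletion: boolean;
  autoOpenFile: boolean;
}

type PostAction =
  | { kind: 'notify'; message: string }
  | { kind: 'reveal'; downloadId: number };

// Compute which post-download actions apply for a finished download.
function postDownloadActions(
  downloadId: number,
  filename: string,
  settings: NotificationSettings,
): PostAction[] {
  const actions: PostAction[] = [];
  if (settings.notifyOnCompletion) actions.push({ kind: 'notify', message: filename });
  if (settings.autoOpenFile) actions.push({ kind: 'reveal', downloadId });
  return actions;
}
```

Keeping the decision pure makes the settings-driven behavior easy to reason about independently of the Chrome APIs that execute it.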
+
+### DASH-Specific Notes
+
+- No `-bsf:a aac_adtstoasc` bitstream filter — DASH segments are already in ISOBMF container format
+- Intermediate files use `.mp4` extension (not `.ts`)
+- Live detection: `type="dynamic"` attribute in the MPD root element
+- Poll interval: `minimumUpdatePeriod` attribute in the MPD
+- DRM detection: presence of `<ContentProtection>` elements in any `AdaptationSet`
+- mpd-parser v1.3.1 is used; type declarations are in `src/types/mpd-parser.d.ts` (no `@types` package available)
+
+### Project Structure
+
+```
+src/
+├── service-worker.ts # Central orchestrator
+├── content.ts # Content script (IIFE)
+├── shared/
+│   ├── messages.ts # MessageType enum
+│   └── constants.ts # DEFAULT_MAX_CONCURRENT, DEFAULT_FFMPEG_TIMEOUT_MS, etc.
+├── core/
+│   ├── types/
+│   │   └── index.ts # VideoFormat, DownloadState, DownloadStage, Fragment, Level
+│   ├── detection/
+│   │   ├── detection-manager.ts
+│   │   ├── thumbnail-utils.ts
+│   │   ├── direct/direct-detection-handler.ts
+│   │   ├── hls/hls-detection-handler.ts
+│   │   └── dash/dash-detection-handler.ts
+│   ├── downloader/
+│   │   ├── download-manager.ts
+│   │   ├── base-playlist-handler.ts # Hot-path progress, cachedState, 500ms throttle
+│   │   ├── base-recording-handler.ts # Shared polling loop for live streams
+│   │   ├── concurrent-workers.ts
+│   │   ├── crypto-utils.ts # AES-128 decryption
+│   │   ├── header-rules.ts # DNR Origin/Referer injection
+│   │   ├── types.ts
+│   │   ├── direct/direct-download-handler.ts
+│   │   ├── hls/hls-download-handler.ts
+│   │   ├── hls/hls-recording-handler.ts
+│   │   ├── m3u8/m3u8-download-handler.ts
+│   │   ├── dash/dash-download-handler.ts
+│   │   └── dash/dash-recording-handler.ts
+│   ├── parsers/
+│   │   ├── m3u8-parser.ts # HLS parsing (wraps m3u8-parser)
+│   │   ├── mpd-parser.ts # DASH parsing (wraps mpd-parser)
+│   │   └── playlist-utils.ts # ParsedPlaylist, ParsedSegment, parseLevelsPlaylist()
+│   ├── ffmpeg/
+│   │   ├── ffmpeg-bridge.ts
+│   │   ├── ffmpeg-singleton.ts
+│   │   └── offscreen-manager.ts
+│   ├── database/
+│   │   ├── connection.ts # 
IDB init (media-bridge v3)
+│   │   ├── downloads.ts # storeDownload(), getDownload(), etc.
+│   │   └── chunks.ts # storeChunk(), deleteChunks(), getChunkCount()
+│   ├── storage/
+│   │   ├── chrome-storage.ts
+│   │   └── settings.ts # AppSettings interface + loadSettings() — always use this
+│   ├── cloud/ # Google Drive + S3 upload providers
+│   │   ├── base-cloud-provider.ts # Abstract base + ProgressCallback type
+│   │   ├── google-auth.ts
+│   │   ├── google-drive.ts # GoogleDriveClient extends BaseCloudProvider
+│   │   ├── s3-client.ts # S3Client extends BaseCloudProvider + orphaned upload cleanup
+│   │   └── upload-manager.ts # Provider registry (Map)
+│   ├── metadata/
+│   │   └── metadata-extractor.ts
+│   └── utils/
+│       ├── blob-utils.ts
+│       ├── cancellation.ts
+│       ├── download-utils.ts
+│       ├── drm-utils.ts
+│       ├── errors.ts # MediaBridgeError hierarchy
+│       ├── fetch-utils.ts
+│       ├── file-utils.ts
+│       ├── format-utils.ts
+│       ├── id-utils.ts
+│       ├── logger.ts
+│       └── url-utils.ts
+├── popup/
+│   ├── popup.ts / popup.html
+│   ├── state.ts
+│   ├── tabs.ts
+│   ├── render-downloads.ts
+│   ├── render-videos.ts
+│   ├── render-manifest.ts
+│   ├── download-actions.ts
+│   └── utils.ts
+├── options/
+│   ├── options.ts / options.html
+│   └── constants.ts # Options-page-only constants (UI bounds, toast duration)
+├── offscreen/
+│   └── offscreen.ts / offscreen.html
+└── types/
+    └── mpd-parser.d.ts
+```
+
+## Upload UI Requirements
+
+When an upload starts, the upload button's icon should act as a small progress indicator that fills with the upload's progress. When the upload completes, remove the indicator and add an "uploaded" badge to the history card, while keeping the re-upload button visible. In-progress uploads must not be shown in the popup Downloads tab.
diff --git a/README.md b/README.md index 7fd19dd..0ee436e 100644 --- a/README.md +++ b/README.md @@ -16,6 +16,7 @@ A Manifest V3 Chromium extension that detects and downloads videos from the web - **Header Injection**: Injects `Origin`/`Referer` headers via `declarativeNetRequest` for CDNs that require them - **Download History**: Completed, failed, and cancelled downloads are persisted and browsable in the options page History section with infinite scroll - **Notifications**: Optional OS notification and auto-open file on download completion +- **Cloud Upload**: Upload completed downloads to Google Drive or S3-compatible storage from the History page - **Configurable Settings**: Recording poll intervals, fetch retry behaviour, detection cache sizes, IDB sync rate — all tunable from the options page ## ⚠️ Output File Size Limit @@ -24,11 +25,31 @@ Because video processing uses **FFmpeg.wasm** (a WebAssembly build of FFmpeg run > **Planned**: A future release will replace FFmpeg.wasm with [mediabunny](https://github.com/nicktindall/mediabunny) for native-speed muxing without the 2 GB constraint. -## Planned Features +## Cloud Upload Setup -The following features are planned but not yet implemented: +Completed downloads can be uploaded to **Google Drive** or **S3-compatible storage** from **Options → History → item menu → Upload to cloud**. -- **Cloud storage uploads**: The code infrastructure for Google Drive exists (`core/cloud/`) but is not wired up — no uploads are triggered after downloads complete. Future versions will support Google Drive and other cloud providers (S3, Dropbox, etc.). +### Google Drive + +Google Drive requires you to create your own OAuth credentials (free): + +1. Go to [Google Cloud Console](https://console.cloud.google.com/) and create a project (or use an existing one). +2. Enable the **[Google Drive API](https://console.cloud.google.com/apis/library/drive.googleapis.com)**. +3. 
Go to **[Credentials](https://console.cloud.google.com/apis/credentials)** → **Create Credentials** → **OAuth client ID**.
+4. Set application type to **Web application**.
+5. Under **Authorized redirect URIs**, add your extension's redirect URI.
+   - Find it in **Options → Cloud Providers → Google Drive** — it's shown next to the Client ID field.
+   - It looks like `https://<extension-id>.chromiumapp.org/`
+6. Copy the **Client ID** and paste it into the options page.
+7. Click **Sign in with Google** to authorize.
+
+> **Note:** If you haven't configured a consent screen yet, Google will prompt you to create one. Choose **External** user type, fill in the required fields, and add yourself as a test user. The app will work in "Testing" mode — no verification needed for personal use.
+
+### S3 / S3-Compatible Storage
+
+1. In **Options → Cloud Providers → S3**, enter your bucket name, region, access key ID, and secret access key.
+2. Your S3 bucket must have a CORS policy that allows the extension origin. The options page generates the correct JSON and provides a **Copy CORS Config** button — paste it into **S3 → Bucket → Permissions → CORS**.
+3. Works with AWS S3, Cloudflare R2, Backblaze B2, Wasabi, MinIO, and any S3-compatible provider.
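For reference, a policy of the shape the options page generates looks like the following — the extension ID is a placeholder, and `"ExposeHeaders": ["ETag"]` is an assumption (completing a multipart upload requires reading each part's `ETag` response header); prefer the generated config, which fills in your real `chrome.runtime.id`:

```json
[
  {
    "AllowedOrigins": ["chrome-extension://<your-extension-id>"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
```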
## Installation @@ -173,10 +194,11 @@ src/ │ ├── storage/ │ │ ├── chrome-storage.ts # Raw chrome.storage.local access │ │ └── settings.ts # AppSettings interface + loadSettings() — always use this -│ ├── cloud/ # ⚠️ Planned — infrastructure exists, not yet wired up -│ │ ├── google-auth.ts -│ │ ├── google-drive.ts -│ │ └── upload-manager.ts +│ ├── cloud/ # Google Drive + S3 upload providers +│ │ ├── google-auth.ts # OAuth via launchWebAuthFlow (user-provided client ID) +│ │ ├── google-drive.ts # Resumable upload (chunked for files > 5 MB) +│ │ ├── s3-client.ts # SigV4-signed PUT / multipart upload +│ │ └── upload-manager.ts # Provider registry + routing │ ├── metadata/ │ │ └── metadata-extractor.ts │ └── utils/ @@ -203,7 +225,7 @@ src/ - `storage` — Config persistence - `downloads` — Save downloaded files -- `identity` — OAuth (reserved for future cloud upload) +- `identity` — Google OAuth via `launchWebAuthFlow` - `activeTab` / `scripting` — Content script injection - `offscreen` — Offscreen document for FFmpeg.wasm - `unlimitedStorage` — Large segment storage in IndexedDB diff --git a/public/shared.css b/public/shared.css index 8a4756b..eaa0630 100644 --- a/public/shared.css +++ b/public/shared.css @@ -142,6 +142,11 @@ body { color: var(--success); } +.badge-uploaded { + background: rgba(59, 130, 246, 0.12); + color: #3b82f6; +} + .badge-failed { background: rgba(248, 113, 113, 0.12); color: var(--error); @@ -178,6 +183,11 @@ body { color: #16a34a; } +:root.light-mode .badge-uploaded { + background: rgba(37, 99, 235, 0.08); + color: #2563eb; +} + :root.light-mode .badge-failed { background: rgba(220, 38, 38, 0.08); color: #dc2626; @@ -187,3 +197,38 @@ body { background: rgba(217, 119, 6, 0.08); color: #d97706; } + +/* ---- Segmented Tabs (shared pill style) ---- */ +.seg-tabs { + display: flex; + gap: 4px; + padding: 5px; + background: var(--surface-2); + border-radius: var(--radius-sm); + margin-bottom: 20px; + width: fit-content; +} + +.seg-tab { + padding: 7px 
14px;
+  border: none;
+  outline: none;
+  border-radius: calc(var(--radius-sm) - 2px);
+  font-family: inherit;
+  font-size: 13px;
+  font-weight: 500;
+  cursor: pointer;
+  background: transparent;
+  color: var(--text-secondary);
+  transition: all 0.15s;
+}
+
+.seg-tab:hover {
+  color: var(--text-primary);
+}
+
+.seg-tab.active {
+  background: var(--surface-3);
+  color: var(--text-primary);
+  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
+}
diff --git a/src/core/cloud/base-cloud-provider.ts b/src/core/cloud/base-cloud-provider.ts
new file mode 100644
index 0000000..dc393d4
--- /dev/null
+++ b/src/core/cloud/base-cloud-provider.ts
@@ -0,0 +1,18 @@
+import { CloudProvider } from '../../shared/messages';
+
+export type ProgressCallback = (uploaded: number, total: number) => void;
+
+export abstract class BaseCloudProvider {
+  abstract readonly id: CloudProvider;
+
+  /**
+   * Upload blob to this provider.
+   * Returns the shareable URL of the uploaded file.
+   */
+  abstract upload(
+    blob: Blob,
+    filename: string,
+    onProgress?: ProgressCallback,
+    signal?: AbortSignal,
+  ): Promise<string>;
+}
diff --git a/src/core/cloud/google-auth.ts b/src/core/cloud/google-auth.ts
index 3bc8363..cc4ca29 100644
--- a/src/core/cloud/google-auth.ts
+++ b/src/core/cloud/google-auth.ts
@@ -1,5 +1,6 @@
 /**
- * Google OAuth using chrome.identity API
+ * Google OAuth using chrome.identity.launchWebAuthFlow.
+ * Users supply their own OAuth client ID via the options page.
  */
 
 import { AuthError } from "../utils/errors";
@@ -8,7 +9,6 @@ import { ChromeStorage } from "../storage/chrome-storage";
 
 const CLIENT_ID_STORAGE_KEY = "google_client_id";
 const TOKEN_STORAGE_KEY = "google_access_token";
-const REFRESH_TOKEN_STORAGE_KEY = "google_refresh_token";
 
 // Google OAuth scopes
 export const GOOGLE_DRIVE_SCOPES = [
@@ -35,7 +35,7 @@ export class GoogleAuth {
   }
 
   /**
-   * Authenticate with Google
+   * Authenticate with Google via launchWebAuthFlow (implicit grant).
   */
  static async authenticate(scopes: string[]): Promise<string> {
    try {
@@ -46,14 +46,17 @@
         );
       }
 
-      // Use chrome.identity.getAuthToken for OAuth
-      const token = await new Promise<string>((resolve, reject) => {
-        chrome.identity.getAuthToken(
-          {
-            interactive: true,
-            scopes: scopes,
-          },
-          (token) => {
+      const redirectUrl = chrome.identity.getRedirectURL();
+      const authUrl = new URL("https://accounts.google.com/o/oauth2/v2/auth");
+      authUrl.searchParams.set("client_id", clientId);
+      authUrl.searchParams.set("redirect_uri", redirectUrl);
+      authUrl.searchParams.set("response_type", "token");
+      authUrl.searchParams.set("scope", scopes.join(" "));
+
+      const responseUrl = await new Promise<string>((resolve, reject) => {
+        chrome.identity.launchWebAuthFlow(
+          { url: authUrl.toString(), interactive: true },
+          (callbackUrl) => {
             if (chrome.runtime.lastError) {
               reject(
                 new AuthError(
@@ -62,16 +65,24 @@
               );
               return;
             }
-            if (!token) {
-              reject(new AuthError("No token received"));
+            if (!callbackUrl) {
+              reject(new AuthError("No callback URL received"));
               return;
             }
-            resolve(token);
+            resolve(callbackUrl);
           },
         );
       });
 
-      // Store token
+      // Extract access_token from the URL fragment
+      const hashParams = new URLSearchParams(
+        new URL(responseUrl).hash.slice(1),
+      );
+      const token = hashParams.get("access_token");
+      if (!token) {
+        throw new AuthError("No access token in OAuth response");
+      }
+
       await ChromeStorage.set(TOKEN_STORAGE_KEY, token);
 
       logger.info("Google authentication successful");
@@ -103,7 +114,6 @@
    */
   private static async isTokenValid(token: string): Promise<boolean> {
     try {
-      // Verify token by making a simple API call
       const response = await fetch(
         "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=" + token,
       );
@@ -121,32 +131,15 @@
 
     if (token) {
       try {
-        // Revoke token using Google API
         await fetch(
           `https://accounts.google.com/o/oauth2/revoke?token=${token}`,
         );
-
-        // Remove token from 
Chrome identity - await new Promise((resolve, reject) => { - chrome.identity.removeCachedAuthToken({ token }, () => { - if (chrome.runtime.lastError) { - logger.warn( - "Failed to remove cached token:", - chrome.runtime.lastError, - ); - } - resolve(); - }); - }); } catch (error) { logger.warn("Failed to revoke token:", error); } } - // Clear stored tokens await ChromeStorage.remove(TOKEN_STORAGE_KEY); - await ChromeStorage.remove(REFRESH_TOKEN_STORAGE_KEY); - logger.info("Signed out successfully"); } diff --git a/src/core/cloud/google-drive.ts b/src/core/cloud/google-drive.ts index 2d7e1ff..a0accaf 100644 --- a/src/core/cloud/google-drive.ts +++ b/src/core/cloud/google-drive.ts @@ -3,11 +3,14 @@ */ import { GoogleAuth, GOOGLE_DRIVE_SCOPES } from "./google-auth"; +import { BaseCloudProvider, ProgressCallback } from "./base-cloud-provider"; import { UploadError } from "../utils/errors"; import { logger } from "../utils/logger"; const DRIVE_API_BASE = "https://www.googleapis.com/drive/v3"; +const DRIVE_UPLOAD_BASE = "https://www.googleapis.com/upload/drive/v3"; const RESUMABLE_UPLOAD_THRESHOLD_BYTES = 5 * 1024 * 1024; // 5 MB +const CHUNK_SIZE = 8 * 1024 * 1024; // 8 MB — must be a multiple of 256 KB export interface GoogleDriveConfig { targetFolderId?: string; @@ -20,17 +23,24 @@ export interface UploadResult { webViewLink?: string; } -export class GoogleDriveClient { +export class GoogleDriveClient extends BaseCloudProvider { + readonly id = 'googleDrive' as const; private config: GoogleDriveConfig; constructor(config: GoogleDriveConfig = {}) { + super(); this.config = config; } /** - * Upload file to Google Drive + * Upload file to Google Drive. Returns the webViewLink (or fileId as fallback). 
*/ - async uploadFile(blob: Blob, filename: string): Promise { + async upload( + blob: Blob, + filename: string, + onProgress?: ProgressCallback, + signal?: AbortSignal, + ): Promise { try { const token = await GoogleAuth.getAccessToken(GOOGLE_DRIVE_SCOPES); @@ -42,18 +52,25 @@ export class GoogleDriveClient { ); } - // For files larger than 5MB, use resumable upload + // For files larger than 5MB, use resumable chunked upload + let result: UploadResult; if (blob.size > RESUMABLE_UPLOAD_THRESHOLD_BYTES) { - return await this.resumableUpload(blob, filename, token, folderId); + result = await this.resumableUpload(blob, filename, token, folderId, onProgress, signal); + } else { + // Simple multipart upload for smaller files + result = await this.simpleUpload(blob, filename, token, folderId, signal); } - // Simple upload for smaller files - return await this.simpleUpload(blob, filename, token, folderId); + return result.webViewLink ?? result.fileId; } catch (error) { + // Abort is expected on cancel — not an error + if (error instanceof DOMException && error.name === 'AbortError') { + throw error; + } logger.error("Google Drive upload failed:", error); throw error instanceof UploadError ? error - : new UploadError(`Upload failed: ${error}`); + : new UploadError(`Upload failed: ${error instanceof Error ? error.message : String(error)}`); } } @@ -65,6 +82,7 @@ export class GoogleDriveClient { filename: string, token: string, folderId?: string, + signal?: AbortSignal, ): Promise { const metadata: any = { name: filename, @@ -82,13 +100,14 @@ export class GoogleDriveClient { form.append("file", blob); const response = await fetch( - `${DRIVE_API_BASE}/files?uploadType=multipart`, + `${DRIVE_UPLOAD_BASE}/files?uploadType=multipart`, { method: "POST", headers: { Authorization: `Bearer ${token}`, }, body: form, + signal, }, ); @@ -113,77 +132,130 @@ export class GoogleDriveClient { } /** - * Resumable upload (for files > 5MB) + * Resumable chunked upload (for files > 5 MB). 
+ * + * Protocol: + * 1. POST to initiate session → get Location (session URI) + * 2. PUT chunks in CHUNK_SIZE increments with Content-Range header + * - Intermediate chunks → 308 Resume Incomplete; read Range header for offset + * - Final chunk → 200 OK / 201 Created with file metadata */ private async resumableUpload( blob: Blob, filename: string, token: string, folderId?: string, + onProgress?: ProgressCallback, + signal?: AbortSignal, ): Promise { - // Step 1: Initialize resumable upload session - const metadata: any = { - name: filename, - }; + const totalBytes = blob.size; + const mimeType = blob.type || "application/octet-stream"; - if (folderId) { - metadata.parents = [folderId]; - } + // Step 1: Initiate resumable session + const metadata: Record = { name: filename }; + if (folderId) metadata.parents = [folderId]; const initResponse = await fetch( - `${DRIVE_API_BASE}/files?uploadType=resumable`, + `${DRIVE_UPLOAD_BASE}/files?uploadType=resumable`, { method: "POST", headers: { Authorization: `Bearer ${token}`, - "Content-Type": "application/json", + "Content-Type": "application/json; charset=UTF-8", + "X-Upload-Content-Type": mimeType, + "X-Upload-Content-Length": totalBytes.toString(), }, body: JSON.stringify(metadata), }, ); if (!initResponse.ok) { - const error = await initResponse + const err = await initResponse .json() .catch(() => ({ error: { message: initResponse.statusText } })); throw new UploadError( - `Failed to initialize upload: ${ - error.error?.message || initResponse.statusText - }`, + `Failed to initialize resumable upload: ${err.error?.message || initResponse.statusText}`, initResponse.status, ); } - const uploadUrl = initResponse.headers.get("Location"); - if (!uploadUrl) { - throw new UploadError("No upload URL received"); + const sessionUri = initResponse.headers.get("Location"); + if (!sessionUri) { + throw new UploadError("Drive did not return a resumable session URI"); } - // Step 2: Upload file data - const uploadResponse = await 
fetch(uploadUrl, { - method: "PUT", - headers: { - "Content-Type": blob.type || "application/octet-stream", - "Content-Length": blob.size.toString(), - }, - body: blob, - }); + // Step 2: Upload chunks + let offset = 0; - if (!uploadResponse.ok) { - throw new UploadError( - `Upload failed: ${uploadResponse.statusText}`, - uploadResponse.status, - ); - } + while (offset < totalBytes) { + signal?.throwIfAborted(); + const end = Math.min(offset + CHUNK_SIZE, totalBytes); + const chunk = blob.slice(offset, end); + const chunkSize = end - offset; + const isLast = end === totalBytes; - const result = await uploadResponse.json(); + const response = await fetch(sessionUri, { + method: "PUT", + headers: { + "Content-Length": chunkSize.toString(), + "Content-Range": `bytes ${offset}-${end - 1}/${totalBytes}`, + "Content-Type": mimeType, + }, + body: chunk, + signal, + }); - logger.info(`File uploaded successfully (resumable): ${result.id}`); + // Session expired — cannot recover without restarting + if (response.status === 404) { + throw new UploadError("Resumable upload session expired (404)"); + } - return { - fileId: result.id, - webViewLink: result.webViewLink, - }; + // 5xx errors: could query server position and retry, but keep it simple + if (response.status >= 500) { + throw new UploadError( + `Server error during chunk upload: ${response.status} ${response.statusText}`, + response.status, + ); + } + + if (isLast) { + // Final chunk: expect 200 or 201 + if (response.status !== 200 && response.status !== 201) { + throw new UploadError( + `Unexpected status for final chunk: ${response.status}`, + response.status, + ); + } + const result = await response.json(); + logger.info(`Drive upload complete (resumable): ${result.id}`); + onProgress?.(totalBytes, totalBytes); + return { fileId: result.id, webViewLink: result.webViewLink }; + } + + // Intermediate chunk: expect 308 Resume Incomplete + if (response.status !== 308) { + throw new UploadError( + `Unexpected status for 
intermediate chunk: ${response.status}`, + response.status, + ); + } + + // Advance offset from server-confirmed Range header + const rangeHeader = response.headers.get("Range"); + if (rangeHeader) { + // Format: "bytes=0-N" + const confirmedEnd = parseInt(rangeHeader.split("-")[1], 10); + offset = confirmedEnd + 1; + } else { + // Server received nothing yet — retry from same offset + logger.warn("Drive returned 308 with no Range header; retrying chunk from same offset"); + } + + onProgress?.(offset, totalBytes); + } + + // Should be unreachable + throw new UploadError("Resumable upload loop exited without completing"); } /** diff --git a/src/core/cloud/s3-client.ts b/src/core/cloud/s3-client.ts new file mode 100644 index 0000000..ba21ce9 --- /dev/null +++ b/src/core/cloud/s3-client.ts @@ -0,0 +1,569 @@ +/** + * S3-compatible upload client with SigV4 request signing. + * Works with AWS S3, Cloudflare R2, Backblaze B2, Wasabi, MinIO, and any + * S3-compatible provider that accepts path-style or virtual-hosted-style URLs. + * + * Uses Web Crypto API — no external dependencies. + */ + +import { BaseCloudProvider, ProgressCallback } from "./base-cloud-provider"; +import { UploadError } from "../utils/errors"; +import { logger } from "../utils/logger"; + +const S3_PENDING_UPLOADS_KEY = "s3_pending_uploads"; + +interface PendingUpload { + key: string; + uploadId: string; +} + +export interface S3Config { + bucket: string; + region: string; + accessKeyId: string; + secretAccessKey: string; + /** Custom endpoint for S3-compatible providers. Defaults to AWS S3 virtual-hosted URL. */ + endpoint?: string; + /** Key prefix prepended to all uploaded object names. */ + prefix?: string; +} + +export interface S3UploadResult { + /** The public URL of the uploaded object (path-style). */ + url: string; + key: string; +} + +// S3 requires each part to be >= 5 MB (except the last). Use 10 MB parts. +// Threshold matches part size so every file >= 10 MB gets chunked progress. 
+// Files < 10 MB are uploaded as a single PUT (S3 rejects multipart parts < 5 MB). +const PART_SIZE = 10 * 1024 * 1024; +const MULTIPART_THRESHOLD = PART_SIZE; + +export class S3Client extends BaseCloudProvider { + readonly id = 's3' as const; + private readonly config: S3Config; + + constructor(config: S3Config) { + super(); + this.config = config; + } + + async upload( + blob: Blob, + filename: string, + onProgress?: ProgressCallback, + signal?: AbortSignal, + ): Promise<string> { + const key = this.config.prefix + ? `${this.config.prefix.replace(/\/$/, "")}/${filename}` + : filename; + + let result: S3UploadResult; + if (blob.size >= MULTIPART_THRESHOLD) { + result = await this.multipartUpload(blob, key, onProgress, signal); + } else { + result = await this.putUpload(blob, key, onProgress, signal); + } + return result.url; + } + + /** Single-part PUT upload for files < 10 MB */ + private async putUpload( + blob: Blob, + key: string, + onProgress?: ProgressCallback, + signal?: AbortSignal, + ): Promise<S3UploadResult> { + const url = this.objectUrl(key); + const buffer = await blob.arrayBuffer(); + const payloadHash = await sha256hex(buffer); + + const now = new Date(); + const datetime = isoDatetime(now); + const date = datetime.slice(0, 8); + + const headers: Record<string, string> = { + "Content-Type": blob.type || "video/mp4", + "Content-Length": String(blob.size), + "x-amz-content-sha256": payloadHash, + "x-amz-date": datetime, + Host: new URL(url).host, + }; + + const authorization = await this.buildAuthorization( + "PUT", + new URL(url), + headers, + payloadHash, + datetime, + date, + ); + headers["Authorization"] = authorization; + delete headers["Host"]; // fetch adds it automatically + + const response = await fetch(url, { + method: "PUT", + headers, + body: buffer, + signal, + }); + + if (!response.ok) { + const text = await response.text().catch(() => response.statusText); + throw new UploadError( + `S3 PUT failed (${response.status}): ${text}`, + response.status, + ); + } + + 
onProgress?.(blob.size, blob.size); + logger.info(`S3 upload complete: ${key}`); + return { url, key }; + } + + /** Multipart upload for files >= 10 MB */ + private async multipartUpload( + blob: Blob, + key: string, + onProgress?: ProgressCallback, + signal?: AbortSignal, + ): Promise<S3UploadResult> { + // 1. Initiate + const uploadId = await this.initiateMultipart(key); + await this.savePendingUpload(key, uploadId); + const parts: Array<{ PartNumber: number; ETag: string }> = []; + let uploadedBytes = 0; + + try { + const totalParts = Math.ceil(blob.size / PART_SIZE); + + for (let i = 0; i < totalParts; i++) { + signal?.throwIfAborted(); + const start = i * PART_SIZE; + const end = Math.min(start + PART_SIZE, blob.size); + const partBlob = blob.slice(start, end); + const partNumber = i + 1; + + const etag = await this.uploadPart(key, uploadId, partNumber, partBlob, signal); + parts.push({ PartNumber: partNumber, ETag: etag }); + + uploadedBytes += partBlob.size; + onProgress?.(uploadedBytes, blob.size); + } + + // 2. 
Complete + await this.completeMultipart(key, uploadId, parts); + await this.clearPendingUpload(key, uploadId); + } catch (err) { + // Abort on failure to avoid orphaned multipart uploads + await this.abortMultipart(key, uploadId).catch((e) => + logger.warn("Failed to abort multipart upload:", e), + ); + await this.clearPendingUpload(key, uploadId); + throw err; + } + + const url = this.objectUrl(key); + logger.info(`S3 multipart upload complete: ${key}`); + return { url, key }; + } + + private async initiateMultipart(key: string): Promise<string> { + const url = `${this.objectUrl(key)}?uploads`; + const now = new Date(); + const datetime = isoDatetime(now); + const date = datetime.slice(0, 8); + const payloadHash = await sha256hex(""); + + const headers: Record<string, string> = { + "Content-Type": "video/mp4", + "x-amz-content-sha256": payloadHash, + "x-amz-date": datetime, + Host: new URL(url).host, + }; + const authorization = await this.buildAuthorization( + "POST", + new URL(url), + headers, + payloadHash, + datetime, + date, + ); + headers["Authorization"] = authorization; + delete headers["Host"]; + + const response = await fetch(url, { method: "POST", headers }); + if (!response.ok) { + const text = await response.text().catch(() => response.statusText); + throw new UploadError( + `Failed to initiate multipart upload (${response.status}): ${text}`, + response.status, + ); + } + + const xml = await response.text(); + const match = xml.match(/<UploadId>(.+?)<\/UploadId>/); + if (!match?.[1]) throw new UploadError("No UploadId in response"); + return match[1]; + } + + private async uploadPart( + key: string, + uploadId: string, + partNumber: number, + blob: Blob, + signal?: AbortSignal, + ): Promise<string> { + const baseUrl = this.objectUrl(key); + const url = `${baseUrl}?partNumber=${partNumber}&uploadId=${encodeURIComponent(uploadId)}`; + const buffer = await blob.arrayBuffer(); + const payloadHash = await sha256hex(buffer); + const now = new Date(); + const datetime = isoDatetime(now); + const 
date = datetime.slice(0, 8); + + const headers: Record<string, string> = { + "Content-Length": String(blob.size), + "x-amz-content-sha256": payloadHash, + "x-amz-date": datetime, + Host: new URL(url).host, + }; + const authorization = await this.buildAuthorization( + "PUT", + new URL(url), + headers, + payloadHash, + datetime, + date, + ); + headers["Authorization"] = authorization; + delete headers["Host"]; + + const response = await fetch(url, { method: "PUT", headers, body: buffer, signal }); + if (!response.ok) { + const text = await response.text().catch(() => response.statusText); + throw new UploadError( + `Part ${partNumber} upload failed (${response.status}): ${text}`, + response.status, + ); + } + + const etag = response.headers.get("ETag") ?? ""; + return etag.replace(/"/g, ""); + } + + private async completeMultipart( + key: string, + uploadId: string, + parts: Array<{ PartNumber: number; ETag: string }>, + ): Promise<void> { + const url = `${this.objectUrl(key)}?uploadId=${encodeURIComponent(uploadId)}`; + const body = [ + "<CompleteMultipartUpload>", + ...parts.map( + (p) => + `<Part><PartNumber>${p.PartNumber}</PartNumber><ETag>${p.ETag}</ETag></Part>`, + ), + "</CompleteMultipartUpload>", + ].join(""); + + const now = new Date(); + const datetime = isoDatetime(now); + const date = datetime.slice(0, 8); + const payloadHash = await sha256hex(body); + + const headers: Record<string, string> = { + "Content-Type": "application/xml", + "x-amz-content-sha256": payloadHash, + "x-amz-date": datetime, + Host: new URL(url).host, + }; + const authorization = await this.buildAuthorization( + "POST", + new URL(url), + headers, + payloadHash, + datetime, + date, + ); + headers["Authorization"] = authorization; + delete headers["Host"]; + + const response = await fetch(url, { method: "POST", headers, body }); + if (!response.ok) { + const text = await response.text().catch(() => response.statusText); + throw new UploadError( + `Failed to complete multipart upload: ${text}`, + response.status, + ); + } + } + + async abortMultipart(key: string, uploadId: string): Promise<void> { + const url = 
`${this.objectUrl(key)}?uploadId=${encodeURIComponent(uploadId)}`; + const now = new Date(); + const datetime = isoDatetime(now); + const date = datetime.slice(0, 8); + const payloadHash = await sha256hex(""); + + const headers: Record = { + "x-amz-content-sha256": payloadHash, + "x-amz-date": datetime, + Host: new URL(url).host, + }; + const authorization = await this.buildAuthorization( + "DELETE", + new URL(url), + headers, + payloadHash, + datetime, + date, + ); + headers["Authorization"] = authorization; + delete headers["Host"]; + + await fetch(url, { method: "DELETE", headers }); + } + + private async savePendingUpload(key: string, uploadId: string): Promise { + const data = await chrome.storage.local.get(S3_PENDING_UPLOADS_KEY); + const pending: PendingUpload[] = data[S3_PENDING_UPLOADS_KEY] ?? []; + pending.push({ key, uploadId }); + await chrome.storage.local.set({ [S3_PENDING_UPLOADS_KEY]: pending }); + } + + private async clearPendingUpload(key: string, uploadId: string): Promise { + const data = await chrome.storage.local.get(S3_PENDING_UPLOADS_KEY); + const pending: PendingUpload[] = data[S3_PENDING_UPLOADS_KEY] ?? []; + const filtered = pending.filter( + (p) => !(p.key === key && p.uploadId === uploadId), + ); + await chrome.storage.local.set({ [S3_PENDING_UPLOADS_KEY]: filtered }); + } + + /** + * Abort any orphaned multipart uploads from previous service worker crashes. + * Call from service worker init(). No-ops if S3 isn't configured or no pending uploads exist. + */ + static async cleanupOrphanedUploads(config?: S3Config): Promise { + const data = await chrome.storage.local.get(S3_PENDING_UPLOADS_KEY); + const pending: PendingUpload[] = data[S3_PENDING_UPLOADS_KEY] ?? 
[]; + if (pending.length === 0) return; + + if (!config) { + logger.warn( + `Found ${pending.length} orphaned S3 multipart upload(s) but S3 is not configured — clearing records`, + ); + await chrome.storage.local.set({ [S3_PENDING_UPLOADS_KEY]: [] }); + return; + } + + const client = new S3Client(config); + for (const { key, uploadId } of pending) { + await client.abortMultipart(key, uploadId).catch((e) => + logger.warn(`Failed to abort orphaned multipart upload ${uploadId}:`, e), + ); + } + await chrome.storage.local.set({ [S3_PENDING_UPLOADS_KEY]: [] }); + logger.info(`Cleaned up ${pending.length} orphaned S3 multipart upload(s)`); + } + + /** Build SigV4 Authorization header value */ + private async buildAuthorization( + method: string, + url: URL, + headers: Record, + payloadHash: string, + datetime: string, + date: string, + ): Promise { + const region = this.config.region; + const service = "s3"; + + // Sorted canonical headers (lowercase names) + const signedHeaderNames = Object.keys(headers) + .map((k) => k.toLowerCase()) + .sort(); + + const canonicalHeaders = + signedHeaderNames + .map((name) => { + const value = + headers[ + Object.keys(headers).find((k) => k.toLowerCase() === name)! + ]; + return `${name}:${value.trim()}`; + }) + .join("\n") + "\n"; + + const signedHeaders = signedHeaderNames.join(";"); + + // Canonical query string (sorted) + const queryParams = Array.from(url.searchParams.entries()) + .sort(([a], [b]) => a.localeCompare(b)) + .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`) + .join("&"); + + // url.pathname uses RFC 3986 encoding which leaves sub-delimiters like + // ( ) [ ] ! ' * unencoded, but SigV4 requires encoding everything except + // unreserved chars (A-Za-z0-9 - _ . ~). Decode then re-encode per SigV4. 
+ const canonicalPath = sigV4EncodePath(decodeURIComponent(url.pathname)); + + const canonicalRequest = [ + method, + canonicalPath, + queryParams, + canonicalHeaders, + signedHeaders, + payloadHash, + ].join("\n"); + + const credentialScope = `${date}/${region}/${service}/aws4_request`; + const stringToSign = [ + "AWS4-HMAC-SHA256", + datetime, + credentialScope, + await sha256hex(canonicalRequest), + ].join("\n"); + + const signingKey = await deriveSigningKey( + this.config.secretAccessKey, + date, + region, + service, + ); + const signature = buf2hex(await hmacSha256(signingKey, stringToSign)); + + return ( + `AWS4-HMAC-SHA256 ` + + `Credential=${this.config.accessKeyId}/${credentialScope}, ` + + `SignedHeaders=${signedHeaders}, ` + + `Signature=${signature}` + ); + } + + private objectUrl(key: string): string { + if (this.config.endpoint) { + // Path-style: endpoint/bucket/key + const base = this.config.endpoint.replace(/\/$/, ""); + return `${base}/${this.config.bucket}/${key}`; + } + // AWS virtual-hosted style + return `https://${this.config.bucket}.s3.${this.config.region}.amazonaws.com/${key}`; + } + + /** Verify credentials and bucket access (lightweight HEAD on the bucket) */ + async testConnection(): Promise<{ ok: boolean; error?: string }> { + try { + const url = this.bucketUrl(); + const now = new Date(); + const datetime = isoDatetime(now); + const date = datetime.slice(0, 8); + const payloadHash = await sha256hex(""); + + const headers: Record = { + "x-amz-content-sha256": payloadHash, + "x-amz-date": datetime, + Host: new URL(url).host, + }; + const authorization = await this.buildAuthorization( + "HEAD", + new URL(url), + headers, + payloadHash, + datetime, + date, + ); + headers["Authorization"] = authorization; + delete headers["Host"]; + + const response = await fetch(url, { method: "HEAD", headers }); + if (response.ok || response.status === 403) { + // 403 = bucket exists but no ListBucket permission — credentials are valid + return { ok: 
true }; + } + return { + ok: false, + error: `HTTP ${response.status}: ${response.statusText}`, + }; + } catch (err) { + return { + ok: false, + error: err instanceof Error ? err.message : String(err), + }; + } + } + + private bucketUrl(): string { + if (this.config.endpoint) { + const base = this.config.endpoint.replace(/\/$/, ""); + return `${base}/${this.config.bucket}`; + } + return `https://${this.config.bucket}.s3.${this.config.region}.amazonaws.com`; + } +} + +// ---- SigV4 URI encoding ---- + +/** Encode a single URI component per SigV4: only A-Za-z0-9 - _ . ~ are unreserved. */ +function sigV4Encode(str: string): string { + return encodeURIComponent(str) + .replace(/!/g, "%21") + .replace(/'/g, "%27") + .replace(/\(/g, "%28") + .replace(/\)/g, "%29") + .replace(/\*/g, "%2A"); +} + +/** Encode a full path per SigV4, preserving '/' separators. */ +function sigV4EncodePath(rawPath: string): string { + return rawPath.split("/").map(sigV4Encode).join("/"); +} + +// ---- Crypto helpers ---- + +function buf2hex(buffer: ArrayBuffer): string { + return Array.from(new Uint8Array(buffer)) + .map((b) => b.toString(16).padStart(2, "0")) + .join(""); +} + +async function sha256hex(data: string | ArrayBuffer): Promise { + const buf: BufferSource = + typeof data === "string" ? 
new TextEncoder().encode(data) : data; + return buf2hex(await crypto.subtle.digest("SHA-256", buf)); +} + +async function hmacSha256( + key: BufferSource, + data: string, +): Promise<ArrayBuffer> { + const cryptoKey = await crypto.subtle.importKey( + "raw", + key, + { name: "HMAC", hash: "SHA-256" }, + false, + ["sign"], + ); + return crypto.subtle.sign("HMAC", cryptoKey, new TextEncoder().encode(data)); +} + +async function deriveSigningKey( + secretKey: string, + date: string, + region: string, + service: string, +): Promise<ArrayBuffer> { + const kDate = await hmacSha256( + new TextEncoder().encode(`AWS4${secretKey}`), + date, + ); + const kRegion = await hmacSha256(kDate, region); + const kService = await hmacSha256(kRegion, service); + return hmacSha256(kService, "aws4_request"); +} + +function isoDatetime(d: Date): string { + return d.toISOString().replace(/[-:]/g, "").slice(0, 15) + "Z"; +} diff --git a/src/core/cloud/upload-manager.ts b/src/core/cloud/upload-manager.ts index e168b13..c8b508e 100644 --- a/src/core/cloud/upload-manager.ts +++ b/src/core/cloud/upload-manager.ts @@ -1,86 +1,128 @@ /** - * Cloud upload orchestration + * Cloud upload orchestration — provider-agnostic registry. 
*/ -import { GoogleDriveClient, UploadResult } from "./google-drive"; -import { DownloadState, DownloadStage } from "../types"; +import { GoogleDriveClient } from "./google-drive"; +import { S3Client } from "./s3-client"; +import { BaseCloudProvider, ProgressCallback } from "./base-cloud-provider"; +import { DownloadState, DownloadStage, StorageConfig } from "../types"; import { UploadError } from "../utils/errors"; import { logger } from "../utils/logger"; -import { StorageConfig } from "../types"; +import { CloudProvider } from "../../shared/messages"; + +export interface CloudLinks { + googleDrive?: string; // webViewLink + s3?: string; // object URL +} export interface UploadManagerOptions { - config?: StorageConfig; - onProgress?: (state: DownloadState) => void; + config: StorageConfig; + onProgress?: (uploadedBytes: number, totalBytes: number) => void; + onStateUpdate?: (state: DownloadState) => Promise; } export class UploadManager { - private googleDrive?: GoogleDriveClient; - private onProgress?: (state: DownloadState) => void; - - constructor(options: UploadManagerOptions = {}) { - if (options.config?.googleDrive?.enabled) { - this.googleDrive = new GoogleDriveClient({ - targetFolderId: options.config.googleDrive.targetFolderId, - createFolderIfNotExists: - options.config.googleDrive.createFolderIfNotExists, - folderName: options.config.googleDrive.folderName, + private readonly providers = new Map(); + private readonly onProgress?: (uploaded: number, total: number) => void; + private readonly onStateUpdate?: (state: DownloadState) => Promise; + + constructor(options: UploadManagerOptions) { + const { config } = options; + this.onProgress = options.onProgress; + this.onStateUpdate = options.onStateUpdate; + + if (config.googleDrive?.enabled) { + const drive = new GoogleDriveClient({ + targetFolderId: config.googleDrive.targetFolderId, + createFolderIfNotExists: config.googleDrive.createFolderIfNotExists, + folderName: config.googleDrive.folderName, }); + 
this.providers.set(drive.id, drive); } - this.onProgress = options.onProgress; + if ( + config.s3?.enabled && + config.s3.bucket && + config.s3.region && + config.s3.accessKeyId && + config.s3.secretAccessKey + ) { + const s3 = new S3Client({ + bucket: config.s3.bucket, + region: config.s3.region, + accessKeyId: config.s3.accessKeyId, + secretAccessKey: config.s3.secretAccessKey, + endpoint: config.s3.endpoint, + prefix: config.s3.prefix, + }); + this.providers.set(s3.id, s3); + } } /** - * Upload file to configured cloud storage + * Fetch the blob from a blob URL and upload to the chosen provider. + * Must be called BEFORE the blob URL is revoked. */ - async uploadFile( - blob: Blob, + async uploadFromBlobUrl( + blobUrl: string, filename: string, - downloadState?: DownloadState, - ): Promise { - if (!this.googleDrive) { - logger.warn("Google Drive not configured"); - return null; + downloadState: DownloadState, + provider: CloudProvider, + ): Promise { + if (!this.isConfigured()) { + return {}; } - try { - if (downloadState) { - downloadState.progress.stage = DownloadStage.UPLOADING; - downloadState.progress.message = "Uploading to Google Drive..."; - this.onProgress?.(downloadState); - } + const response = await fetch(blobUrl); + if (!response.ok) { + throw new UploadError(`Failed to read blob for upload: ${response.statusText}`); + } + const blob = await response.blob(); - const result = await this.googleDrive.uploadFile(blob, filename); + return this.uploadBlob(blob, filename, downloadState, provider); + } - if (downloadState) { - downloadState.cloudId = result.fileId; - downloadState.progress.stage = DownloadStage.COMPLETED; - downloadState.progress.message = "Upload completed"; - this.onProgress?.(downloadState); - } + /** + * Upload an already-fetched blob to a single chosen provider. 
+ */ + async uploadBlob( + blob: Blob, + filename: string, + downloadState: DownloadState, + provider: CloudProvider, + signal?: AbortSignal, + ): Promise { + const client = this.providers.get(provider); + if (!client) { + throw new UploadError(`Provider "${provider}" is not configured`); + } - logger.info(`File uploaded successfully: ${result.fileId}`); - return result; - } catch (error) { - logger.error("Upload failed:", error); + // Notify UPLOADING stage + downloadState.progress.stage = DownloadStage.UPLOADING; + downloadState.progress.message = "Uploading to cloud..."; + downloadState.progress.percentage = 0; + await this.onStateUpdate?.(downloadState); - if (downloadState) { - downloadState.progress.stage = DownloadStage.FAILED; - downloadState.progress.error = - error instanceof Error ? error.message : String(error); - this.onProgress?.(downloadState); + const onProgress: ProgressCallback = (uploaded, total) => { + this.onProgress?.(uploaded, total); + const pct = total > 0 ? Math.round((uploaded / total) * 100) : 0; + if (downloadState.progress.percentage !== pct) { + downloadState.progress.percentage = pct; + downloadState.progress.message = `Uploading... ${pct}%`; + // Fire-and-forget — don't block the upload data flow + this.onStateUpdate?.(downloadState); } + }; + const url = await client.upload(blob, filename, onProgress, signal); - throw error instanceof UploadError - ? 
error - : new UploadError(`Upload failed: ${error}`); - } + const links: CloudLinks = {}; + links[provider] = url; + logger.info(`${provider} upload complete: ${url}`); + + return links; } - /** - * Check if upload is configured - */ isConfigured(): boolean { - return this.googleDrive !== undefined; + return this.providers.size > 0; } } diff --git a/src/core/downloader/base-playlist-handler.ts b/src/core/downloader/base-playlist-handler.ts index 94d8393..43a7a5e 100644 --- a/src/core/downloader/base-playlist-handler.ts +++ b/src/core/downloader/base-playlist-handler.ts @@ -64,6 +64,7 @@ export abstract class BasePlaylistHandler { protected readonly maxPollIntervalMs: number; protected readonly pollFraction: number; + protected downloadId: string = ""; protected bytesDownloaded: number = 0; protected totalBytes: number = 0; diff --git a/src/core/downloader/download-manager.ts b/src/core/downloader/download-manager.ts index 2d8dca0..706f54b 100644 --- a/src/core/downloader/download-manager.ts +++ b/src/core/downloader/download-manager.ts @@ -29,9 +29,6 @@ export interface DownloadManagerOptions { /** Optional callback for download progress updates */ onProgress?: DownloadProgressCallback; - /** Whether to upload completed downloads to Google Drive @default false */ - uploadToDrive?: boolean; - /** FFmpeg processing timeout in milliseconds @default 900000 (15 minutes) */ ffmpegTimeout?: number; @@ -70,7 +67,6 @@ export interface DownloadManagerOptions { export class DownloadManager { private readonly maxConcurrent: number; private readonly onProgress?: DownloadProgressCallback; - private readonly uploadToDrive: boolean; private readonly directDownloadHandler: DirectDownloadHandler; private readonly hlsDownloadHandler: HlsDownloadHandler; private readonly m3u8DownloadHandler: M3u8DownloadHandler; @@ -83,7 +79,6 @@ export class DownloadManager { constructor(options: DownloadManagerOptions = {}) { this.maxConcurrent = options.maxConcurrent || DEFAULT_MAX_CONCURRENT; 
     this.onProgress = options.onProgress;
-    this.uploadToDrive = options.uploadToDrive || false;
 
     const ffmpegTimeout = options.ffmpegTimeout || DEFAULT_FFMPEG_TIMEOUT_MS;
     const sharedOptions = {
diff --git a/src/core/storage/secure-storage.ts b/src/core/storage/secure-storage.ts
new file mode 100644
index 0000000..cec5d33
--- /dev/null
+++ b/src/core/storage/secure-storage.ts
@@ -0,0 +1,92 @@
+/**
+ * Passphrase-based AES-GCM encryption for secrets stored in chrome.storage.local.
+ *
+ * Key derivation: PBKDF2 (SHA-256, 100 000 iterations) → AES-GCM 256-bit key.
+ * Passphrase is cached in chrome.storage.session so the user only enters it once
+ * per browser session (session storage is cleared on browser close).
+ *
+ * Zero external dependencies — uses only the Web Crypto API.
+ */
+
+const PBKDF2_ITERATIONS = 100_000;
+const SESSION_KEY = "s3_passphrase";
+
+export interface EncryptedBlob {
+  encrypted: string; // base64 ciphertext
+  iv: string; // base64 IV (12 bytes)
+  salt: string; // base64 PBKDF2 salt (16 bytes)
+}
+
+function toBase64(buf: ArrayBuffer): string {
+  return btoa(String.fromCharCode(...new Uint8Array(buf)));
+}
+
+function fromBase64(b64: string): Uint8Array {
+  const binary = atob(b64);
+  const buf = new Uint8Array(binary.length);
+  for (let i = 0; i < binary.length; i++) buf[i] = binary.charCodeAt(i);
+  return buf;
+}
+
+async function deriveKey(passphrase: string, salt: Uint8Array): Promise<CryptoKey> {
+  const enc = new TextEncoder();
+  const keyMaterial = await crypto.subtle.importKey(
+    "raw",
+    enc.encode(passphrase),
+    "PBKDF2",
+    false,
+    ["deriveKey"],
+  );
+  return crypto.subtle.deriveKey(
+    { name: "PBKDF2", salt: salt as BufferSource, iterations: PBKDF2_ITERATIONS, hash: "SHA-256" },
+    keyMaterial,
+    { name: "AES-GCM", length: 256 },
+    false,
+    ["encrypt", "decrypt"],
+  );
+}
+
+export class SecureStorage {
+  static async encrypt(plaintext: string, passphrase: string): Promise<EncryptedBlob> {
+    const salt = crypto.getRandomValues(new Uint8Array(16)) as Uint8Array<ArrayBuffer>;
+    const iv = crypto.getRandomValues(new Uint8Array(12)) as Uint8Array<ArrayBuffer>;
+    const key = await deriveKey(passphrase, salt);
+    const enc = new TextEncoder();
+    const ciphertext = await crypto.subtle.encrypt(
+      { name: "AES-GCM", iv: iv.buffer },
+      key,
+      enc.encode(plaintext),
+    );
+    return {
+      encrypted: toBase64(ciphertext),
+      iv: toBase64(iv.buffer),
+      salt: toBase64(salt.buffer),
+    };
+  }
+
+  static async decrypt(blob: EncryptedBlob, passphrase: string): Promise<string> {
+    const salt = fromBase64(blob.salt);
+    const iv = fromBase64(blob.iv);
+    const ciphertext = fromBase64(blob.encrypted);
+    const key = await deriveKey(passphrase, salt);
+    const plaintext = await crypto.subtle.decrypt(
+      { name: "AES-GCM", iv: iv.buffer },
+      key,
+      ciphertext.buffer,
+    );
+    return new TextDecoder().decode(plaintext);
+  }
+
+  static async setPassphrase(passphrase: string): Promise<void> {
+    await chrome.storage.session.set({ [SESSION_KEY]: passphrase });
+  }
+
+  static async getPassphrase(): Promise<string | null> {
+    const result = await chrome.storage.session.get(SESSION_KEY);
+    return (result[SESSION_KEY] as string) ?? null;
+  }
+
+  static async clearPassphrase(): Promise<void> {
+    await chrome.storage.session.remove(SESSION_KEY);
+  }
+}
diff --git a/src/core/storage/settings.ts b/src/core/storage/settings.ts
index 0d25cc3..97725db 100644
--- a/src/core/storage/settings.ts
+++ b/src/core/storage/settings.ts
@@ -6,7 +6,7 @@
  * need null-checks or scattered `?? DEFAULT_X` fallbacks.
  */
 
-import { StorageConfig } from "../types";
+import { StorageConfig, EncryptedBlob } from "../types";
 import { ChromeStorage } from "./chrome-storage";
 import {
   DEFAULT_MAX_CONCURRENT,
@@ -44,6 +44,7 @@ export interface AppSettings {
     endpoint?: string;
     accessKeyId?: string;
     secretAccessKey?: string;
+    secretKeyEncrypted?: EncryptedBlob;
     prefix?: string;
   };
 
@@ -91,6 +92,7 @@ export async function loadSettings(): Promise<AppSettings> {
       endpoint: raw?.s3?.endpoint,
       accessKeyId: raw?.s3?.accessKeyId,
       secretAccessKey: raw?.s3?.secretAccessKey,
+      secretKeyEncrypted: raw?.s3?.secretKeyEncrypted,
       prefix: raw?.s3?.prefix,
     },
diff --git a/src/core/types/index.ts b/src/core/types/index.ts
index 60e3645..37cf396 100644
--- a/src/core/types/index.ts
+++ b/src/core/types/index.ts
@@ -74,12 +74,23 @@ export interface DownloadState {
   progress: DownloadProgress;
   localPath?: string;
   cloudId?: string;
+  cloudLinks?: {
+    googleDrive?: string; // webViewLink
+    s3?: string; // public URL or s3:// URI
+  };
+  uploadError?: string; // last upload failure message
   isManual?: boolean; // Indicates if download was started from manual/manifest tab
   chromeDownloadId?: number; // Chrome downloads API ID for reliable cancellation (only set when Chrome API is used)
   createdAt: number;
   updatedAt: number;
 }
 
+export interface EncryptedBlob {
+  encrypted: string; // base64 ciphertext
+  iv: string; // base64 IV
+  salt: string; // base64 PBKDF2 salt
+}
+
 export interface StorageConfig {
   googleDrive?: {
     enabled: boolean;
@@ -96,7 +107,8 @@
     region?: string;
     endpoint?: string; // For S3-compatible providers (Cloudflare R2, Backblaze, etc.)
     accessKeyId?: string;
-    secretAccessKey?: string;
+    secretAccessKey?: string; // plaintext fallback (no passphrase set)
+    secretKeyEncrypted?: EncryptedBlob; // AES-GCM encrypted secret key (passphrase mode)
     prefix?: string;
   };
   recording?: {
diff --git a/src/core/utils/blob-utils.ts b/src/core/utils/blob-utils.ts
index e35a9ff..81be384 100644
--- a/src/core/utils/blob-utils.ts
+++ b/src/core/utils/blob-utils.ts
@@ -74,12 +74,15 @@ export async function saveBlobUrlToFile(
       if (delta.state.current === "complete") {
         clearTimeout(timeoutId);
         chrome.downloads.onChanged.removeListener(onChange);
-        revokeBlobUrl(blobUrl);
-        // Retrieve filename from the completed download
-        chrome.downloads.search({ id: downloadId }, (results) => {
-          const item = results?.[0];
-          resolve(item?.filename || filename);
-        });
+
+        const finish = async () => {
+          revokeBlobUrl(blobUrl);
+          chrome.downloads.search({ id: downloadId }, (results) => {
+            const item = results?.[0];
+            resolve(item?.filename || filename);
+          });
+        };
+        finish();
       } else if (delta.state.current === "interrupted") {
         clearTimeout(timeoutId);
         chrome.downloads.onChanged.removeListener(onChange);
diff --git a/src/options/options.html b/src/options/options.html
index 808d231..5e15eaf 100644
--- a/src/options/options.html
+++ b/src/options/options.html
@@ -399,41 +399,20 @@
       }
 
-      /* ---- Provider Tabs ---- */
-      .provider-tabs {
-        display: flex;
-        gap: 4px;
-        padding: 4px;
-        background: var(--surface-2);
-        border-radius: var(--radius-sm);
-        margin-bottom: 20px;
-        width: fit-content;
-      }
-
-      .provider-tab {
-        padding: 6px 14px;
-        border: none;
-        border-radius: calc(var(--radius-sm) - 2px);
-        font-family: inherit;
-        font-size: 13px;
-        font-weight: 500;
-        cursor: pointer;
-        background: transparent;
-        color: var(--text-secondary);
-        transition: all 0.15s;
+      /* ---- Provider / Advanced Panels ---- */
+      .provider-panel {
+        display: none;
       }
 
-      .provider-tab.active {
-        background: var(--surface-1);
-        color: var(--text-primary);
-        box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
+      .provider-panel.active {
+        display: block;
       }
 
-      .provider-panel {
+      .advanced-panel {
         display: none;
       }
 
-      .provider-panel.active {
+      .advanced-panel.active {
         display: block;
       }
 
@@ -680,6 +659,26 @@
        color: var(--text-primary);
      }
 
+      .history-cancel-upload {
+        display: flex;
+        align-items: center;
+        justify-content: center;
+        width: 28px;
+        height: 28px;
+        border-radius: var(--radius-sm);
+        border: 1px solid transparent;
+        background: transparent;
+        color: var(--text-secondary);
+        cursor: pointer;
+        padding: 0;
+        transition: background 0.15s, color 0.15s;
+      }
+
+      .history-cancel-upload:hover {
+        background: var(--surface-2);
+        color: var(--danger, #f38ba8);
+      }
+
      .history-menu {
        display: none;
        position: absolute;
@@ -909,6 +908,15 @@
           Advanced
@@ -998,10 +1006,27 @@

       Cloud Providers
 
       Enable Google Drive uploads
-      Automatically upload downloaded videos to Google Drive.
+      Connect Google Drive to upload downloaded videos.