# Driver

Driver is an open-source cloud operations manager for teams that need one place to browse storage, enrich metadata, automate repetitive workflows, and observe background jobs without losing operational context.
It connects Microsoft OneDrive, Google Drive, and Dropbox behind a single workspace, built with FastAPI, React, Redis, and Docker Compose.
Warning: this app is intentionally vibe coded. If the vibes are good, ship it. If the vibes are cursed, check the logs first.
- Why Driver
- Highlights
- Architecture
- Screenshots
- Stack
- Quick Start
- Configuration
- Workers and Queues
- Metadata Libraries
- Documentation
- Useful Commands
- Troubleshooting

## Why Driver
- Unify multi-provider file operations in one UI instead of juggling vendor consoles.
- Turn metadata into a first-class workflow with bulk edits, rules, and library-specific schemas.
- Run expensive operations asynchronously with queues, retries, dead-letter handling, and job visibility.
- Give operators a real control plane with dashboards, runtime settings, and AI-assisted actions.

## Highlights
- Unified file workspace with search, filters, bulk actions, uploads, and similar-file reports.
- Metadata libraries for comics, images, and books, with category stats, layouts, and inline editing.
- Rules engine to apply metadata, remove metadata, rename items, and move items at scale.
- AI assistant with tool traces, confirmation gates, and workspace-aware actions.
- Admin dashboard for queue depth, success rate, provider usage, throughput, latency, and dead-letter analysis.
- Admin settings for scheduler, worker runtime, AI provider mode, language, and plugin-backed metadata libraries.
- Docker Compose topology with shared backend runtime image, migration service, and a single general-purpose worker.

## Architecture
High-level flow:
- React + Vite frontend for operator workflows.
- FastAPI API for provider integration, metadata, rules, admin, and AI routes.
- Redis-backed ARQ workers for async jobs and retries.
- PostgreSQL-compatible database for application state.
- Image-analysis jobs can be toggled independently and are disabled by default in the sample environment.
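The enqueue/retry/dead-letter hand-off described above can be sketched with a minimal in-memory model. This is illustrative only: the real implementation uses ARQ jobs on Redis, and the names and the retry limit below are hypothetical.

```python
from collections import deque

MAX_RETRIES = 3  # hypothetical; the real limit is a worker/ARQ setting

queue = deque()   # stands in for the Redis-backed jobs queue
dead_letter = []  # jobs that exhausted their retries, kept for analysis

def enqueue(job_type: str, payload: dict) -> None:
    """What the API layer does conceptually: push a job and return fast."""
    queue.append({"type": job_type, "payload": payload, "attempts": 0})

def run_worker(handler) -> None:
    """One drain pass: run each job, re-queue on failure, dead-letter on exhaustion."""
    while queue:
        job = queue.popleft()
        try:
            handler(job)
        except Exception:
            job["attempts"] += 1
            if job["attempts"] < MAX_RETRIES:
                queue.append(job)        # retry later
            else:
                dead_letter.append(job)  # surfaced in dead-letter analysis

# Example: a handler that always fails ends up in the dead-letter list.
enqueue("metadata.enrich", {"file_id": "abc"})

def always_fails(job):
    raise RuntimeError("provider timeout")

run_worker(always_fails)
print(len(dead_letter))  # 1
```

The point of the model is only the shape of the flow: the API never blocks on the job, and failures stay visible instead of disappearing.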
## Screenshots

All screenshots below were regenerated from mocked workspace data and include privacy-safe masking for identity fields.

## Stack
- Backend: FastAPI + SQLAlchemy + Alembic
- Workers: ARQ + Redis
- Frontend: React + Vite + Tailwind
- Database: PostgreSQL (recommended)
- Optional extras: AI runtime integrations and a dedicated vision worker profile

## Quick Start
Prerequisites:
- Python 3.12+
- Node.js 18+
- `uv`
- Docker Desktop for Compose-based execution
Run everything with Docker Compose:

```
docker compose up -d --build --remove-orphans
```

Default behavior:
- builds the shared backend runtime image once and reuses it for API + worker
- runs database migrations in the `migrate` service before API/workers start
- uses local Docker tags for dev compose and leaves release versioning to CI/CD
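Pieced together from the service names, image tags, and environment values mentioned elsewhere in this README, the topology looks roughly like this. This is an illustrative sketch, not the shipped Compose file:

```yaml
services:
  migrate:
    image: driver-backend:local        # shared backend runtime image
    command: alembic -c src/alembic.ini upgrade head
  backend:
    image: driver-backend:local        # same image as migrate/worker
    depends_on:
      migrate:
        condition: service_completed_successfully
    environment:
      DB_POOL_MODE: "null"
    ports:
      - "8000:8000"
  worker:
    image: driver-backend:local
    environment:
      WORKER_CONCURRENCY: "6"
      DB_POOL_MODE: "null"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  frontend:
    image: driver-frontend:local
    ports:
      - "5173:5173"
```

The `service_completed_successfully` condition is what guarantees migrations finish before the API and worker come up.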
Services:
- Frontend: `http://localhost:5173`
- Backend API: `http://localhost:8000`
- OpenAPI: `http://localhost:8000/docs`
- Redis: `localhost:6379`
- Worker: `worker`
Logs:

```
docker compose logs -f backend
docker compose logs -f migrate
docker compose logs -f worker
```

Stop:

```
docker compose down
```

Install dependencies once:

```
uv sync --project src/backend
npm.cmd --prefix src/frontend ci --workspaces=false
```

Run backend + frontend in one terminal:

```
uv run scripts/dev.py
```

By default this launcher starts:
- backend API
- Vite frontend
- worker
Redis still needs to be running separately.
Useful flags:
```
uv run scripts/dev.py --skip-migrate
uv run scripts/dev.py --skip-workers
uv run scripts/dev.py --with-scheduler
uv run scripts/dev.py --backend-port 8001 --frontend-port 5174
uv run scripts/dev.py --dry-run
```

Manual backend (PowerShell, from repo root):
```
uv sync --project src/backend
$env:PYTHONPATH = (Resolve-Path .\src)
uv run --project src/backend alembic -c src/alembic.ini upgrade head
uv run --project src/backend uvicorn backend.main:app --reload --host 0.0.0.0 --port 8000
```

Manual frontend:
```
npm.cmd --prefix src/frontend ci --workspaces=false
npm.cmd --prefix src/frontend run dev --workspaces=false
```

Workers via Docker Compose:
```
docker compose up -d worker
docker compose logs -f worker
docker compose stop worker
```

Dedicated scheduler process:
```
uv run python -m backend.workers.scheduler_worker
```

## Configuration

- Copy `env.example` to `.env`.
- Fill in OAuth credentials and secrets.
- Set `DATABASE_URL` and `REDIS_URL`.
Required:

- `SECRET_KEY`
- `ENCRYPTION_KEY`
- `DATABASE_URL`
- `REDIS_URL`

Microsoft provider:

- `MS_CLIENT_ID`
- `MS_CLIENT_SECRET`
- `MS_REDIRECT_URI`

Google provider:

- `GOOGLE_CLIENT_ID`
- `GOOGLE_CLIENT_SECRET`
- `GOOGLE_REDIRECT_URI`

Dropbox provider:

- `DROPBOX_CLIENT_ID`
- `DROPBOX_CLIENT_SECRET`
- `DROPBOX_REDIRECT_URI`
You can run with only Microsoft, only Google, or only Dropbox.
Optional settings:

- `MS_TENANT_ID` (defaults to `common`)
- `REDIS_QUEUE_NAME` (defaults to `driver:jobs`)
- `WORKER_CONCURRENCY`
- `WORKER_JOB_TIMEOUT_SECONDS`
- `DB_POOL_MODE`
- `DB_POOL_SIZE`
- `DB_MAX_OVERFLOW`
- `ENABLE_DAILY_SYNC_SCHEDULER`
- `RUN_SCHEDULER_IN_API`
- `SCHEDULER_DISTRIBUTED_LOCK_ENABLED`
- `SCHEDULER_LOCK_KEY`
- `SCHEDULER_LOCK_TTL_SECONDS`
- `DAILY_SYNC_CRON`
- comics and metadata-library vars under `COMIC_*` when you enable those workflows
- AI runtime settings and remote provider vars for the assistant features
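As a concrete example, a Google-only `.env` might look like the sketch below. All values are placeholders, the `postgresql://` scheme and the redirect path are assumptions (use the URI you actually registered in the Google console), and `env.example` remains the authoritative variable list:

```env
# Core secrets and connections
SECRET_KEY=replace-with-a-long-random-string
ENCRYPTION_KEY=replace-with-a-long-random-string
DATABASE_URL=postgresql://driver:driver@localhost:5432/driver
REDIS_URL=redis://localhost:6379/0

# Google provider only -- Microsoft and Dropbox vars can stay unset
GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-client-secret
GOOGLE_REDIRECT_URI=http://localhost:8000/auth/google/callback
```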
- Microsoft Entra app registration: https://learn.microsoft.com/entra/identity-platform/quickstart-register-app
- Microsoft Graph permissions reference: https://learn.microsoft.com/graph/permissions-reference
- Google OAuth consent screen: https://developers.google.com/workspace/guides/configure-oauth-consent
- Google OAuth 2.0 for web server apps: https://developers.google.com/identity/protocols/oauth2/web-server
- Dropbox OAuth guide: https://developers.dropbox.com/oauth-guide
- Dropbox app console: https://www.dropbox.com/developers/apps

## Workers and Queues
Current strategy:
- one shared queue backed by one worker service with higher concurrency
- sync, metadata, rules, IO, and comics jobs all resolve to `driver:jobs`
- image-analysis jobs are intentionally disabled while the vision path is being reworked
Current Compose profile:
- `worker`: `WORKER_CONCURRENCY=6`, `DB_POOL_MODE=null`
- backend API: `DB_POOL_MODE=null`
Container and build notes:
- backend and worker share the same `driver-backend:local` image
- the backend Docker build ignores `src/frontend` entirely to cut build context size
- the frontend image is tagged as `driver-frontend:local`
- the frontend image copies only the files required for the Vite production build
For managed poolers such as Supabase in session mode, prefer `DB_POOL_MODE=null` so each process does not stack its own large SQLAlchemy pool on top of the external pooler.
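One plausible way to interpret `DB_POOL_MODE` is sketched below. The function name is hypothetical, and the `pool_size`/`max_overflow` fallbacks are SQLAlchemy's own defaults rather than Driver's documented ones; in real code the `"NullPool"` marker would be `sqlalchemy.pool.NullPool` passed as `poolclass` to `create_engine()`.

```python
def pool_settings_from_env(env: dict[str, str]) -> dict:
    """Hypothetical sketch: translate DB_POOL_MODE into engine pool kwargs.

    "null" -> NullPool: no process-local pool, every checkout opens a fresh
    connection, so an external pooler multiplexes instead of stacking pools.
    Anything else -> a sized pool driven by DB_POOL_SIZE / DB_MAX_OVERFLOW.
    """
    if env.get("DB_POOL_MODE", "").lower() == "null":
        return {"poolclass": "NullPool"}  # sqlalchemy.pool.NullPool in real code
    return {
        "pool_size": int(env.get("DB_POOL_SIZE", "5")),
        "max_overflow": int(env.get("DB_MAX_OVERFLOW", "10")),
    }

print(pool_settings_from_env({"DB_POOL_MODE": "null"}))
print(pool_settings_from_env({"DB_POOL_SIZE": "20"}))
```

The trade-off: NullPool pays a connection-setup cost per checkout, which is exactly what a session-mode pooler like Supabase's is there to absorb.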
## Metadata Libraries

Driver ships with library-oriented workflows instead of treating every file the same way.
- `comics_core`: archive extraction, issue-centric metadata, cover indexing, and library jobs
- `images_core`: image analysis, image tagging, and media-oriented enrichment
- `books_core`: book metadata mapping and library-scale indexing
You can keep using Driver as a general cloud operations tool, or lean into the library features as your catalog grows.
## Documentation

- API docs index
- Complete backend endpoint map
- OpenAPI UI at `http://localhost:8000/docs`

## Useful Commands
Backend:
```
uv run pytest -q
uv run --project src/backend pytest --cov=src/backend --cov-report=xml
```

Frontend:
```
cd src/frontend
npm.cmd run lint --workspaces=false
npm.cmd run build --workspaces=false
npm.cmd run test --workspaces=false
npm.cmd run coverage --workspaces=false
```

Full coverage check:

```
uv run python scripts/run_coverage.py
```

Important endpoints:
- Health: `http://localhost:8000/health`
- OpenAPI: `http://localhost:8000/docs`
- Admin settings API: `GET/PUT /api/v1/admin/settings`
- Admin observability API: `GET /api/v1/admin/observability`

## Troubleshooting
On Windows, use `npx.cmd` and `npm.cmd`:

```
npx.cmd --version
npm.cmd run dev --workspaces=false
```

Rebuild and restart the stack:

```
docker compose up -d --build --remove-orphans
```

Checklist:
- Redis is running: `docker compose logs -f redis`
- the worker has the correct `WORKER_QUEUE_NAME`
- the backend queue policy resolves jobs to `driver:jobs`
- migrations completed successfully in the `migrate` service