Get your app live in 15 minutes.
| Part | Where | Cost |
|---|---|---|
| Backend | Railway, Render, or any VPS | $5-20/mo |
| Frontend | Vercel | Free |
| Database | Supabase, Neon, or Railway | Free tier available |
| Redis | Upstash or Railway | Free tier available |
Pick a database provider; all three have free tiers.
**Supabase**
- Create a project at supabase.com
- Go to Settings → Database → Connection string
- Copy the `postgres://` URL
**Neon**
- Create a project at neon.tech
- Copy the connection string from the dashboard
**Railway**
- Create a PostgreSQL service at railway.app
- Copy `DATABASE_URL` from the Variables tab
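Whichever provider you pick, the result is a single `postgres://` connection string. As a quick sanity check, Node's built-in `URL` class can parse it (the credentials and host below are placeholders):

```javascript
// Parse a placeholder Postgres connection string with Node's built-in URL class.
const url = new URL('postgres://user:secret@db.example.com:5432/mydb');

console.log(url.username);          // "user"
console.log(url.hostname);          // "db.example.com"
console.log(url.port);              // "5432"
console.log(url.pathname.slice(1)); // "mydb" — the database name
```

If any of these come out empty, the string you copied is probably truncated or missing its scheme.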
To deploy the backend on Railway:

- Push your code to GitHub
- Create a new project at railway.app
- Select "Deploy from GitHub repo"
- Set the root directory to `apps/backend`
- Add environment variables:

  ```
  DATABASE_URL=postgres://...
  BETTER_AUTH_SECRET=<your-secret>
  BETTER_AUTH_URL=https://<your-railway-url>
  FRONTEND_URL=https://<your-vercel-url>
  ```

- Railway auto-deploys on every push
To deploy on your own VPS instead, SSH into your server:
```sh
# Install Docker
curl -fsSL https://get.docker.com | sh

# Clone and build
git clone <your-repo> app && cd app
docker build -t api -f apps/backend/Dockerfile .

# Run
docker run -d \
  --name api \
  --restart unless-stopped \
  -p 9999:9999 \
  -e DATABASE_URL="postgres://..." \
  -e BETTER_AUTH_SECRET="..." \
  -e BETTER_AUTH_URL="https://api.yourdomain.com" \
  -e FRONTEND_URL="https://yourdomain.com" \
  api
```

Point your domain at it with a reverse proxy:

```sh
# Caddy (automatic HTTPS)
echo "api.yourdomain.com { reverse_proxy localhost:9999 }" | sudo tee /etc/caddy/Caddyfile
sudo systemctl reload caddy
```

To deploy the frontend on Vercel:

- Push to GitHub
- Import at vercel.com/new
- Set:
  - Root Directory: `apps/frontend`
  - Build Command: `cd ../.. && pnpm build:frontend`
  - Output Directory: `dist`
- Add an environment variable: `VITE_API_URL=https://api.yourdomain.com`
- Deploy

Vercel auto-deploys on every push.
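In a Vite app, `VITE_API_URL` is read at build time via `import.meta.env.VITE_API_URL`. A hypothetical helper (the function name and local-dev fallback are this sketch's own choices) shows the usual pattern of falling back to a local API and stripping a trailing slash:

```javascript
// Hypothetical helper: resolve the API base URL from build-time env.
// The real app would read import.meta.env.VITE_API_URL; the env object is
// passed in here so the logic is easy to test in isolation.
function apiBase(env) {
  const raw = env.VITE_API_URL ?? 'http://localhost:9999'; // local backend port
  return raw.replace(/\/+$/, ''); // avoid double slashes when joining paths
}

console.log(apiBase({ VITE_API_URL: 'https://api.yourdomain.com/' })); // https://api.yourdomain.com
console.log(apiBase({}));                                             // http://localhost:9999
```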
**Upstash**
- Create a database at upstash.com
- Copy the Redis URL
- Add it to the backend env: `REDIS_URL=redis://...`
**Railway**
- Add a Redis service to your project
- Copy `REDIS_URL` from the Variables tab
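These connection URLs embed credentials, so avoid printing them verbatim in logs. A small illustrative sketch (the `redact` helper and the Upstash-style hostname are this example's own, not part of the stack):

```javascript
// Illustrative helper: mask the password portion of a connection URL before logging.
function redact(connectionUrl) {
  const u = new URL(connectionUrl);
  if (u.password) u.password = '****';
  return u.toString();
}

console.log(redact('redis://default:supersecret@fly-abc.upstash.io:6379'));
// redis://default:****@fly-abc.upstash.io:6379
```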
**Cloudflare R2**
- Create a bucket at dash.cloudflare.com → R2
- Create an API token with read/write permissions
- Add to the backend env:

  ```
  S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
  S3_BUCKET=your-bucket-name
  S3_ACCESS_KEY_ID=...
  S3_SECRET_ACCESS_KEY=...
  S3_REGION=auto
  ```

**AWS S3**
- Add to the backend env:

  ```
  S3_BUCKET=your-bucket
  S3_REGION=us-east-1
  S3_ACCESS_KEY_ID=...
  S3_SECRET_ACCESS_KEY=...
  ```

If you're using the job queue, run the worker alongside your API:
**Railway**
Add a second service pointing to the same repo:
- Start Command: `pnpm --filter backend jobs`
**Docker**

```sh
docker run -d \
  --name worker \
  --restart unless-stopped \
  -e DATABASE_URL="..." \
  -e REDIS_URL="..." \
  api \
  node src/jobs/worker.js
```

Full environment variable reference:

```
DATABASE_URL=postgres://user:pass@host:5432/db
BETTER_AUTH_SECRET=<32+ random characters>
BETTER_AUTH_URL=https://api.yourdomain.com
FRONTEND_URL=https://yourdomain.com

# Redis
REDIS_URL=redis://...

# File storage
S3_ENDPOINT=https://...
S3_BUCKET=uploads
S3_REGION=auto
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...

# Email
RESEND_API_KEY=re_...

# Tuning
PORT=9999
LOG_LEVEL=info
NODE_ENV=production
```

Verify the deployment:

```sh
# Health check
curl https://api.yourdomain.com/api/health
# Should return:
# {"success":true,"data":{"status":"healthy",...}}
```

**502 Bad Gateway**
→ Backend isn't running. Check the logs: `docker logs api`

**CORS errors**
→ Make sure `FRONTEND_URL` matches your frontend origin exactly (including `https`)

**Auth not working**
→ Check that `BETTER_AUTH_URL` matches your API domain

**Database connection refused**
→ Whitelist your server's IP in your database provider's dashboard
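The CORS point above is worth making precise: origin comparison is an exact string match on scheme + host + port. A simplified sketch of the kind of check a backend effectively performs (the real middleware may differ):

```javascript
// Simplified sketch of an exact-match CORS origin check. It shows why
// "http://yourdomain.com" and "https://www.yourdomain.com" both fail
// against FRONTEND_URL=https://yourdomain.com.
function originAllowed(requestOrigin, frontendUrl) {
  return new URL(requestOrigin).origin === new URL(frontendUrl).origin;
}

console.log(originAllowed('https://yourdomain.com', 'https://yourdomain.com'));     // true
console.log(originAllowed('http://yourdomain.com', 'https://yourdomain.com'));      // false
console.log(originAllowed('https://www.yourdomain.com', 'https://yourdomain.com')); // false
```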