AI-powered fitness form coaching that analyzes workout videos and delivers precise, encouraging cues to improve technique and prevent injuries. Built with ADK for speed, reliability, and extensibility, optimized for a great hackathon demo.
Biome is your AI form coach. Upload a short workout video; in seconds, Biome flags issues (with frame numbers), scores your form, and suggests focused corrections. It's like having a coach in your pocket.
Most people train without a coach. Poor form causes plateaus and injuries, and current tools don't give actionable, frame-specific guidance.
Biome extracts pose landmarks, computes key joint angles, and produces specific cues and recommendations mapped to severity, fast enough for live demos.
- Frame-specific coaching cues tied to quantified joint-angle metrics
- Lightweight, local-first demo flow that works reliably at hackathons
- Schema-first design for results, issues, metrics, and user progress
- Video upload workflow: Validates and stores videos locally in `uploads/`
- Session tracking: Persists analysis sessions, statuses, and results in PostgreSQL
- Form insights: Stores issues, metrics, strengths, and recommendations per session
- Hackathon-friendly: Runs locally with sensible defaults; simple demo flow
- Innovation: Pose-driven analytics with actionable, timestamped cues; schema designed for longitudinal progress.
- Technical Execution: ADK agent pattern, modular tools (`upload_video`, `extract_pose_landmarks`), clean DB layer, reproducible setup.
- Impact: Injury prevention, skill progression, and accessibility of coaching.
- User Experience: Simple upload → immediate feedback; clear next steps.
- Feasibility/Scalability: Extensible to mobile capture, cloud inference, and team dashboards.
- Framework: ADK (Agent Development Kit)
- Model: `gemini-2.0-flash`
- Language: Python 3.11+
- Database: PostgreSQL
This project targets the Cloud Run Hackathon's AI Agents category and aligns with the listed requirements.
- Category: AI Agents, built with Google ADK and deployable to Cloud Run
- Cloud Run requirement: Runs as a Cloud Run Service (HTTP). You may also add a Cloud Run Job for batch analysis.
- Google AI model: Uses Gemini (`gemini-2.0-flash`).
- Submission assets: Hosted URL, 3-minute demo video, public repository, architecture diagram, English documentation.
- Contest period: Opens Oct 6, 2025; closes Nov 10, 2025 at 5:00 PM PT
- Judging: Nov 10–Dec 5, 2025
- Winners announced: ~Dec 12, 2025
Prereqs: gcloud CLI, a Google Cloud project, and Artifact Registry enabled.
# Build container image
gcloud builds submit --tag "gcr.io/$GOOGLE_CLOUD_PROJECT/biome-agent"
# Deploy to Cloud Run (Service)
gcloud run deploy biome-agent \
--image "gcr.io/$GOOGLE_CLOUD_PROJECT/biome-agent" \
--platform managed \
--region us-central1 \
--allow-unauthenticated \
--port 8080
# Set required env vars
gcloud run services update biome-agent \
--region us-central1 \
  --update-env-vars DATABASE_URL="postgresql://..."

Notes:
- For HTTP serving, expose a small FastAPI/ASGI entrypoint (see the sketch below) or use `adk api_server biome_coaching_agent` wrapped by a server in the container. Ensure the container listens on `$PORT`.
- For batch processing, optionally add a Cloud Run Job that triggers `extract_pose_landmarks` on queued sessions.
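A minimal sketch of such an entrypoint, assuming FastAPI and uvicorn (the route name is illustrative; the repo's actual entrypoint is `api_server.py`):

```python
# main.py - minimal ASGI entrypoint sketch. Cloud Run injects PORT;
# default to 8080 for local runs.
import os

import uvicorn
from fastapi import FastAPI

app = FastAPI(title="biome-agent")

@app.get("/healthz")
def healthz() -> dict:
    # Lightweight liveness probe so Cloud Run can verify the container.
    return {"status": "ok"}

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the container accepts traffic from outside.
    uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```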
- Hosted URL to the running service (include in Devpost)
- Architecture diagram (add to repo: `docs/architecture.png` and reference link below)
- 3-minute demo video (YouTube/Vimeo, public, English or with English subtitles)
- Public GitHub repository
- Text description: features, tech used, data sources, findings/learnings
- Testing access: if private routes exist, include credentials in submission
- Declare third-party integrations (MediaPipe/OpenCV)
Architecture diagram placeholder: docs/architecture.png (add before submission)
- `root_agent` orchestrates a two-step tool flow: upload → pose extraction.
- `upload_video` validates and stores a file, and creates an `analysis_sessions` row.
- `extract_pose_landmarks` uses MediaPipe + OpenCV to compute joint angles, aggregates metrics, and returns per-frame data (see the angle sketch after this list).
- PostgreSQL stores sessions, results, and insights defined in `schema.sql`.
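For illustration, a joint angle can be derived from three pose landmarks (e.g., hip-knee-ankle for knee flexion). This is a minimal sketch, not the repo's actual code; the function name is hypothetical:

```python
import numpy as np

def joint_angle(a, b, c) -> float:
    """Return the angle at vertex b, in degrees, formed by points a-b-c."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    ba, bc = a - b, c - b
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    # Clipping guards against floating-point drift outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

# Example with normalized image coordinates (hip, knee, ankle):
print(joint_angle((0.50, 0.40), (0.50, 0.60), (0.62, 0.72)))  # ~135 degrees
```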
- `biome_coaching_agent/agent.py`: Root ADK agent (`root_agent`) and tool wiring
- `biome_coaching_agent/tools/upload_video.py`: Video upload + session creation
- `db/connection.py`: PostgreSQL connection utilities
- `schema.sql`: Full database schema + seed data
- Python 3.11+
- PostgreSQL 14+ running locally
- Windows PowerShell (this repo tested on Windows 10+)
Optional but recommended (per ADK docs):
- `uv` package manager for fast env + dependency management
git clone https://github.com/<your-org>/biome_agent.git
cd biome_agent
# 1. Backend Setup
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
# 2. Database Setup
psql -U postgres -c "CREATE DATABASE biome_coaching;"
psql -U postgres -d biome_coaching -f schema.sql
# 3. Environment Configuration
$env:GOOGLE_API_KEY = "your-gemini-api-key"
$env:DATABASE_URL = "postgresql://postgres:postgres@localhost:5432/biome_coaching"
# 4. Frontend Setup (separate terminal)
npm install
# 5. Start Backend (Terminal 1)
python api_server.py
# Backend runs on http://localhost:8000
# 6. Start Frontend (Terminal 2)
npm start
# Frontend runs on http://localhost:3000Or use the convenience scripts:
# Terminal 1
.\start_backend.ps1
# Terminal 2
.\start_frontend.ps1

- ✅ Beautiful React UI with webcam recording
- ✅ Real AI-powered analysis using MediaPipe + Gemini
- ✅ Complete workflow: Upload → Analyze → Results
- ✅ Database persistence with PostgreSQL
- ✅ Real-time progress tracking
git clone https://github.com/<your-org>/biome_agent.git
cd biome_agent

Using uv (recommended):
uv venv --python "python3.11" .venv
.\.venv\Scripts\Activate.ps1  # Windows PowerShell
uv pip install -U pip
uv pip install -r requirements.txt  # if you add one

Or with plain venv/pip:
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -U pip
pip install -r requirements.txt

Create a database and apply schema:
# In psql:
CREATE DATABASE biome_coaching;
\c biome_coaching
\i schema.sql

Configure connection (optional). By default, the app uses:
postgresql://postgres:postgres@localhost:5432/biome_coaching
Override via environment variable:
$env:DATABASE_URL = "postgresql://<user>:<pass>@<host>:<port>/biome_coaching"

Best for demos, testing, and development:
# Terminal 1 - Backend
python api_server.py
# Terminal 2 - Frontend
npm start

Then visit: http://localhost:3000
This project follows ADK's agent discovery convention. The root agent lives at `biome_coaching_agent/agent.py` and exports `root_agent`.
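The wiring in `agent.py` follows the standard ADK pattern, roughly like this simplified sketch (not the file's exact contents):

```python
# biome_coaching_agent/agent.py (simplified sketch)
from google.adk.agents import Agent

from .tools.upload_video import upload_video
# extract_pose_landmarks and the other tools are wired the same way.

root_agent = Agent(
    name="biome_coaching_agent",
    model="gemini-2.0-flash",
    description="AI form coach that analyzes workout videos.",
    instruction=(
        "Guide the user through uploading a workout video, then run pose "
        "extraction and report issues, metrics, and recommendations."
    ),
    tools=[upload_video],  # plus extract_pose_landmarks, etc.
)
```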
Common options:
- ADK Web UI: `adk web biome_coaching_agent`
- CLI interactive: `adk run biome_coaching_agent`
- API server (basic): `adk api_server biome_coaching_agent`

Note: The custom `api_server.py` is preferred over `adk api_server` for frontend integration, as it includes video upload endpoints and CORS configuration.
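A rough sketch of what such a server can look like, assuming FastAPI (the endpoint path, field names, and response shape are illustrative; see `api_server.py` for the real ones):

```python
# Sketch of a FastAPI server with CORS for the React dev origin and a
# video upload endpoint. Requires python-multipart for UploadFile.
import uuid
from pathlib import Path

from fastapi import FastAPI, UploadFile
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # React dev server
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.post("/api/upload")
async def upload(file: UploadFile) -> dict:
    # Persist the video under uploads/ with a generated session ID.
    session_id = uuid.uuid4().hex
    dest = Path("uploads") / f"{session_id}_{file.filename}"
    dest.parent.mkdir(exist_ok=True)
    dest.write_bytes(await file.read())
    return {"session_id": session_id, "status": "processing"}
```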
- Start the agent (Web UI or CLI)
- Upload a small test video (≤100MB; supported: .mp4, .mov, .avi, .webm)
- The agent will:
  - Store the file in `uploads/` with a generated session ID
  - Create an `analysis_sessions` record with status `processing`
  - Proceed with pose extraction and analysis (tooling hooks set up)
- Review results: issues, metrics, strengths, and recommendations are persisted for visualization

Note: The local demo uses simple file copy for uploads and assumes a reachable PostgreSQL.
- Open ADK Web: `adk web biome_coaching_agent`
- Show a test squat video file (10–20s)
- Upload via the agent UI; show the immediate session ID response
- Trigger `extract_pose_landmarks` on that session
- Present outputs: total frames, key angle stats, and example cues
- Explain the next-step UI to visualize frame-level issues and progress over time
- `DATABASE_URL`: PostgreSQL connection string (optional; has a sensible default)
- Upload directory: created automatically as `uploads/` in the project root
- Max upload size: 100MB (validation illustrated below)
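For illustration, validation consistent with these defaults might look like this sketch (constants and the function name are hypothetical, not the repo's actual code):

```python
from pathlib import Path

ALLOWED_EXTENSIONS = {".mp4", ".mov", ".avi", ".webm"}
MAX_UPLOAD_BYTES = 100 * 1024 * 1024  # 100MB cap from the config above

def validate_video(path: str) -> Path:
    """Reject files the pipeline cannot process before copying to uploads/."""
    p = Path(path)
    if p.suffix.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Unsupported format: {p.suffix}")
    if p.stat().st_size > MAX_UPLOAD_BYTES:
        raise ValueError("File exceeds the 100MB upload limit")
    return p
```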
Key tables (see `schema.sql` for details):

- `analysis_sessions`: One per uploaded video/session
- `analysis_results`: Overall score, timing, frame count
- `form_issues`: Issue type, severity, frame range, coaching cue
- `metrics`: Named metric values and targets
- `strengths`, `recommendations`: Positive feedback and next steps
Quick apply from repo root:
psql -U postgres -d biome_coaching -f schema.sql

- Agent defined in `biome_coaching_agent/agent.py` with tools: `upload_video`, `extract_pose_landmarks`, `analyze_workout_form`, `save_analysis_results`
- Database connection helper: `db/connection.py` (sketched below)
- Custom API server: `api_server.py` provides REST endpoints for frontend integration
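A helper like `db/connection.py` might look roughly like this sketch, assuming psycopg 3 (which `requirements.txt` pins via `psycopg[binary]>=3.1.0`); the actual module may differ:

```python
# Sketch in the spirit of db/connection.py; names are illustrative.
import os

import psycopg

DEFAULT_DSN = "postgresql://postgres:postgres@localhost:5432/biome_coaching"

def get_connection() -> psycopg.Connection:
    """Open a connection using DATABASE_URL, falling back to the local default."""
    return psycopg.connect(os.environ.get("DATABASE_URL", DEFAULT_DSN))

# Usage:
# with get_connection() as conn, conn.cursor() as cur:
#     cur.execute("SELECT status FROM analysis_sessions WHERE id = %s", (session_id,))
```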
- Cannot connect to DB: verify Postgres is running and `DATABASE_URL` is correct.
- `psycopg` installation issues: `requirements.txt` uses `psycopg[binary]>=3.1.0` for easier installation.
- Video fails to open: confirm the file path is correct and the codec is supported (mp4, mov, avi, webm).
- No person detected: try a brighter video, a centered subject, and slower movement.
- Port conflicts: the backend defaults to 8080 (the Cloud Run standard); for local dev, use `$env:PORT=8000`.
- Problem, solution, and demo flow documented
- Local run instructions and environment variables
- Database schema and seed data included
- Clear boundaries for future improvements (pose extraction hook)
- License included
- Reduce injuries and accelerate learning for everyday athletes
- Expand to mobile capture and on-device inference for privacy
- Add visualization UI and coach-mode dashboards
- Integrate with wearables for multi-sensor feedback
- Real pose extraction pipeline integration
- Frontend for video capture and results visualization
- Model evaluation on curated movement datasets
- Mobile capture and privacy-preserving on-device inference
- Biome Team: engineering, ML, product (add names/links here)
This project is licensed under the terms of the LICENSE file in this repository.