F1 telemetry analysis dashboard with FastAPI, React, and LLM-powered insights. Compare drivers, analyze braking performance, and visualize lap data.

bordanattila/F1_Analytics_LLM

F1 Analytics Dashboard 🏎️

A Formula 1 qualifying analysis dashboard powered by FastF1, with a FastAPI backend and React frontend. Inspired by AWS F1 Insights.

Built with Python, FastAPI, React, and TypeScript.

Screenshots: dashboard view, analysis view, and analysis animation.

Features

  • Session Selection: Browse F1 sessions by year, Grand Prix, and session type
  • Driver Comparison: Compare any two drivers head-to-head
  • Telemetry Analysis:
    • Speed comparison across the lap
    • Throttle and brake traces
    • Speed delta visualization
  • Performance Stats: Full throttle %, heavy braking %, and cornering %
  • Sector Times: Side-by-side sector comparison
  • Dark F1-themed UI: Sleek design inspired by official F1 graphics
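
The performance stats above can be derived by classifying each telemetry sample. A minimal sketch: the 98% full-throttle cutoff is an illustrative assumption (not necessarily the repository's threshold), and the brake channel is treated as boolean, the way FastF1 reports it.

```python
import numpy as np

def performance_stats(throttle, brake, full_throttle_min=98.0):
    """Share of telemetry samples spent at full throttle, braking, or cornering.

    throttle: throttle position per sample, 0-100 %
    brake:    True while the brake is applied (FastF1 exposes Brake as boolean)
    full_throttle_min: illustrative cutoff for counting a sample as flat out
    """
    throttle = np.asarray(throttle, dtype=float)
    brake = np.asarray(brake, dtype=bool)
    full = (throttle >= full_throttle_min) & ~brake
    cornering = ~full & ~brake            # neither flat out nor on the brakes
    n = len(throttle)
    return {
        "full_throttle_pct": 100.0 * full.sum() / n,
        "heavy_braking_pct": 100.0 * brake.sum() / n,
        "cornering_pct": 100.0 * cornering.sum() / n,
    }

stats = performance_stats(
    throttle=[100, 100, 40, 0, 0, 60, 100, 100],
    brake=[False, False, False, True, True, False, False, False],
)
# → {'full_throttle_pct': 50.0, 'heavy_braking_pct': 25.0, 'cornering_pct': 25.0}
```

Note that sample shares only approximate time shares when the telemetry is uniformly sampled, which is worth checking before presenting the percentages.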

How It Works

What the LLM knows:

  • General F1 knowledge: pre-trained on racing concepts, terminology, and physics
  • Context interpretation: understands the structured data fed by the agent
  • Natural language generation: explains findings in human-readable coaching language
  • Reasoning ability: can connect patterns (e.g., "tire degradation → late braking issues → stint strategy")

Key Differences:

| Aspect | Agent | LLM |
| --- | --- | --- |
| Knowledge type | Explicit, rule-based | Implicit, learned from training |
| Calculations | Deterministic (always the same output) | Probabilistic (output varies) |
| Expertise | Deep and narrow (braking telemetry) | Broad and shallow (general F1) |
| Reliability | 100% accurate for math/thresholds | Can hallucinate if not grounded |
| Flexibility | Fixed logic | Adapts to questions/context |
| Role | "What data matters + how to compute it" | "What does this mean + how to explain it" |

Why This Architecture Works:

  • Agent ensures data quality: no hallucination in metrics
  • LLM provides flexibility: can answer diverse user questions
  • Grounded AI: the LLM receives structured facts, not raw telemetry
  • Complementary roles: agent = "calculator", LLM = "coach"

Example workflow:

User: "Why is Hamilton losing time in Turn 4?"

Agent: Extracts Turn 4 metrics →

  • brake_delta=-4.5m
  • min_speed_delta=-3km/h
  • lockup_risk=true

LLM: "Hamilton is braking 4.5m earlier than Verstappen at Turn 4, carrying 3km/h less minimum speed. This suggests he may be over-braking initially. The high brake pressure combined with lower apex speed indicates potential lockup risk. Try a later brake point with smoother initial pressure..."
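
The grounding step in this workflow can be as simple as a prompt template: the agent's metrics are serialized verbatim into the prompt, so the LLM explains numbers it was handed rather than inventing its own. Function and field names here are illustrative, not the repository's actual schema.

```python
def build_corner_prompt(corner, driver_a, driver_b, metrics):
    """Render the agent's structured metrics into a grounded coaching prompt.

    metrics: dict of computed facts, e.g. {"brake_delta_m": -4.5};
    keys mirror the example workflow above and are hypothetical names.
    """
    facts = "\n".join(f"- {key} = {value}" for key, value in metrics.items())
    return (
        f"You are an F1 driving coach. Using ONLY the facts below, explain "
        f"why {driver_a} is losing time to {driver_b} at {corner}.\n"
        f"Facts:\n{facts}\n"
        f"Do not introduce numbers that are not listed."
    )

prompt = build_corner_prompt(
    "Turn 4", "Hamilton", "Verstappen",
    {"brake_delta_m": -4.5, "min_speed_delta_kmh": -3, "lockup_risk": True},
)
```

Keeping the facts machine-generated and the explanation model-generated is what gives the "agent = calculator, LLM = coach" split its reliability.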

Project Structure

F1_Analytics/
├── frontend/               # React + TypeScript frontend
│   ├── src/
│   │   ├── components/    # UI components
│   │   ├── services/      # API client
│   │   └── types/         # TypeScript types
│   └── package.json
│
├── backend/                # FastAPI backend
│   ├── main.py            # API entry point
│   ├── routes/            # API endpoints
│   └── services/          # Business logic
│
├── data/
│   ├── raw/               # Raw data collection from FastF1
│   │   └── collector.py   # F1DataCollector class
│   └── processed/         # LLM-ready processed data
│       └── processor.py   # F1DataProcessor class
│
├── llm/                   # LLM integration (future)
│   ├── prompts/
│   ├── agents/
│   └── models/
│
├── deploy/
│   ├── Dockerfile
│   └── k8s/
│
└── requirements.txt

Getting Started

Prerequisites

  • Python 3.9+
  • Node.js 18+
  • npm or yarn

Installation

  1. Clone the repository

    git clone <repo-url>
    cd F1_Analytics
  2. Set up Python backend

    python -m venv .venvf1
    source .venvf1/bin/activate  # Linux/Mac
    # .venvf1\Scripts\activate   # Windows
    pip install -r requirements.txt
  3. Set up React frontend

    cd frontend
    npm install

Running the Application

Terminal 1 - Backend:

source .venvf1/bin/activate
uvicorn backend.main:app --reload --host 0.0.0.0 --port 8000

Terminal 2 - Frontend:

cd frontend
npm run dev

Open http://localhost:5173 in your browser.

Data Collection CLI

Collect raw session data:

python -m data.raw.collector --year 2024 --round 14 --session Q

Process data for LLM:

python -m data.processed.processor --year 2024 --round 14 --session Q
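
A collector CLI of this shape can be sketched as below. The flag names mirror the commands above; the FastF1 calls (`fastf1.get_session`, `Session.load`) are real library API, but the overall structure is an assumption about the repository's `F1DataCollector`, not its actual code.

```python
import argparse

def build_parser():
    """CLI flags mirroring `python -m data.raw.collector` as documented above."""
    parser = argparse.ArgumentParser(description="Collect raw F1 session data")
    parser.add_argument("--year", type=int, required=True)
    parser.add_argument("--round", type=int, required=True,
                        help="championship round number")
    parser.add_argument("--session", default="Q",
                        help="session code, e.g. FP1/FP2/FP3/Q/R")
    return parser

def collect(year, rnd, session_code):
    """Fetch one session with FastF1 (network-bound; FastF1 can cache
    downloads if fastf1.Cache.enable_cache(...) is configured first)."""
    import fastf1                      # imported lazily: needs package + network
    session = fastf1.get_session(year, rnd, session_code)
    session.load()                     # downloads timing, laps, and telemetry
    return session.laps

args = build_parser().parse_args(["--year", "2024", "--round", "14", "--session", "Q"])
# collect(args.year, args.round, args.session) would then return the lap table
```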

API Endpoints

| Endpoint | Description |
| --- | --- |
| GET /api/sessions/years | Get available years |
| GET /api/sessions/events/{year} | Get events for a year |
| GET /api/sessions/load | Load a session |
| GET /api/drivers/ | Get drivers in session |
| GET /api/drivers/fastest-lap | Get a driver's fastest lap |
| GET /api/drivers/stats | Get driver stats |
| GET /api/telemetry/compare | Compare two drivers |
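
The core of a telemetry-compare response is resampling both speed traces onto a common distance axis before subtracting, since the two cars' samples never line up. A minimal sketch of that step (illustrative, not the repository's exact implementation):

```python
import numpy as np

def speed_delta(dist_a, speed_a, dist_b, speed_b, n_points=200):
    """Resample two speed-vs-distance traces onto a shared grid and subtract.

    dist_*/speed_*: monotonically increasing distance (m) and speed samples
    Returns (grid, delta) where delta > 0 means driver A is faster there.
    """
    grid = np.linspace(0.0, min(dist_a[-1], dist_b[-1]), n_points)
    a = np.interp(grid, dist_a, speed_a)   # linear interpolation onto grid
    b = np.interp(grid, dist_b, speed_b)
    return grid, a - b

grid, delta = speed_delta(
    [0, 100, 200], [100, 200, 150],
    [0, 100, 200], [100, 190, 160],
    n_points=3,
)
# → delta of [0.0, 10.0, -10.0]: A gains mid-straight, loses into the corner
```

Truncating the grid to the shorter lap avoids extrapolating past either trace's last sample.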

Tech Stack

  • Backend: FastAPI, FastF1, Pandas
  • Frontend: React 18, TypeScript, Recharts, Axios
  • Styling: Custom CSS with F1-inspired dark theme

Future Enhancements

  • Develop specialized agents to enhance responses (braking, cornering, tyre, etc.)
  • Track map visualization with speed heatmap
  • LLM-powered race strategy analysis
  • Historical comparisons across seasons
  • Tire degradation analysis
  • Real-time session updates

License

MIT
