A FastAPI backend service for generating AI-powered French grammar quiz problems. It uses compositional prompts and multiple LLM providers to create pedagogically focused learning content with targeted grammatical errors.
- Docker and Docker Compose
- Supabase CLI (for local development)
- LLM API key (OpenAI and/or Gemini)
Create a .env file in the project root:

```bash
# LLM Provider (required - at least one)
OPENAI_API_KEY=your_openai_api_key # Required if using OpenAI
GEMINI_API_KEY=your_gemini_api_key # Required if using Gemini
LLM_PROVIDER=gemini # Options: openai, gemini
# Supabase Configuration
SUPABASE_URL=your_supabase_url
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_key
SUPABASE_ANON_KEY=your_supabase_anon_key
```

```bash
# 1. Start local Supabase (required - runs separately)
make start-supabase
# 2. Start the development stack
docker-compose up
```

This starts the following services:
| Service | Port | Description |
|---|---|---|
| FastAPI App | 8000 | API server with embedded Kafka workers |
| Kafka | 9092 | Message queue for async problem generation |
| OpenTelemetry Collector | 4317/4318 | Receives traces and metrics |
| Prometheus | 9090 | Metrics storage |
| Grafana | 3000 | Dashboards (user: lqs, pass: test) |
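To confirm the stack came up, standard Compose commands work as-is (the `app` service name in the logs command is an assumption; use the actual name from docker-compose.yml):

```bash
# List running containers and their published ports
docker-compose ps

# Tail the API logs (replace `app` with the service name
# defined in docker-compose.yml -- the name here is an assumption)
docker-compose logs -f app
```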
The service will be available at:
- API Documentation: http://localhost:8000/docs
- Health Check: http://localhost:8000/health
- Grafana Dashboards: http://localhost:3000
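As a quick smoke test once everything is up (assuming `/health` requires no API key; the exact response body may vary):

```bash
# A 200 response with a status payload means the API is ready
curl -s http://localhost:8000/health
```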
- Async problem generation via Kafka workers with status tracking
- Multi-provider LLM support for OpenAI and Google Gemini
- 107 French verbs with complete conjugation tables across major tenses
- Compositional prompt system generating targeted grammatical errors
- REST API with OpenAPI documentation and API key authentication
- Full observability with OpenTelemetry, Prometheus, and Grafana
| Document | Description |
|---|---|
| Architecture | System design, data flow, and key decisions |
| Development Guide | Development workflows and CLI reference |
| Operations Playbook | Common operations using lqs CLI |
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/api/v1/problems/grammar/random` | GET | Get a random grammar problem from the pool (LRU), with optional filters |
| `/api/v1/problems/{id}` | GET | Get a specific problem by ID |
| `/api/v1/problems/generate` | POST | Trigger async problem generation |
| `/api/v1/generation-requests/{id}` | GET | Check generation request status |
| `/api/v1/verbs/{infinitive}` | GET | Get verb details by infinitive |
| `/api/v1/cache/stats` | GET | View cache statistics |
Full API documentation is available at `/docs` when the service is running, or in the hosted API reference.
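As a sketch of the async flow over plain HTTP: the `X-API-Key` header name, the `count` body field, and the `focus`/`tenses` query parameters below are assumptions inferred from the feature list and the CLI flags, so check `/docs` for the authoritative schema:

```bash
API_KEY=your_api_key

# 1. Trigger async generation of 5 problems
#    (header name and body field are assumptions -- see /docs)
curl -s -X POST http://localhost:8000/api/v1/problems/generate \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"count": 5}'

# 2. Poll the generation request status (id comes from the response above)
curl -s http://localhost:8000/api/v1/generation-requests/<request-id> \
  -H "X-API-Key: $API_KEY"

# 3. Fetch a random problem, filtered by focus and tense
#    (query parameter names assumed to mirror the CLI flags)
curl -s "http://localhost:8000/api/v1/problems/grammar/random?focus=conjugation&tenses=futur_simple" \
  -H "X-API-Key: $API_KEY"
```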
The `lqs` CLI provides direct access to core functionality:

```bash
# Initialize the database with verbs
lqs database init
# Generate problems asynchronously
lqs problem generate -c 5
# Check generation status
lqs generation status <request-id>
# Get a random grammar problem
lqs problem random grammar
# Get a random grammar problem with filters
lqs problem random grammar --focus conjugation --tenses futur_simple
# View problem with LLM reasoning trace
lqs problem get <uuid> --llm-trace
```

See the Operations Playbook for comprehensive CLI usage.
```
src/
├── api/         # REST API endpoints
├── cli/         # Command-line interface (lqs)
├── clients/     # LLM clients (OpenAI, Gemini)
├── prompts/     # Compositional prompt system
├── services/    # Business logic
├── worker/      # Kafka consumers
└── main.py      # Application entry
```
MIT License - see LICENSE