Stack: Python 3.11+, FastAPI, Supabase, Fly.io, OpenAI, Gemini, Kafka
Tools: Poetry, Pytest, ruff

Language Quiz Service

A FastAPI backend service for generating AI-powered French grammar quiz problems. It uses compositional prompts and multiple LLM providers to create pedagogically focused learning content with targeted grammatical errors.

Quick Start

Prerequisites

  • Docker and Docker Compose
  • Supabase CLI (for local development)
  • LLM API key (OpenAI and/or Gemini)

Environment Variables

Create a .env file in the project root:

# LLM Provider (required - at least one)
OPENAI_API_KEY=your_openai_api_key      # Required if using OpenAI
GEMINI_API_KEY=your_gemini_api_key      # Required if using Gemini
LLM_PROVIDER=gemini                      # Options: openai, gemini

# Supabase Configuration
SUPABASE_URL=your_supabase_url
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_key
SUPABASE_ANON_KEY=your_supabase_anon_key
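
How the application reads these values is internal to the repo, but as a minimal sketch of one conventional approach (assuming pydantic-settings, which is common in FastAPI projects; the class and field names here are illustrative, not this repo's actual settings module):

# Illustrative only: loads the .env values above via pydantic-settings.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    openai_api_key: str | None = None  # needed when LLM_PROVIDER=openai
    gemini_api_key: str | None = None  # needed when LLM_PROVIDER=gemini
    llm_provider: str = "gemini"
    supabase_url: str
    supabase_service_role_key: str
    supabase_anon_key: str

settings = Settings()  # raises a validation error if required keys are missing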

Start the Service

# 1. Start local Supabase (required - runs separately)
make start-supabase

# 2. Start the development stack
docker-compose up

This starts the following services:

Service                   Port       Description
FastAPI App               8000       API server with embedded Kafka workers
Kafka                     9092       Message queue for async problem generation
OpenTelemetry Collector   4317/4318  Receives traces and metrics
Prometheus                9090       Metrics storage
Grafana                   3000       Dashboards (user: lqs, pass: test)

The service will be available at http://localhost:8000, with interactive API docs at http://localhost:8000/docs.
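
Once the containers are up, a quick smoke test against the /health endpoint (listed under API Endpoints below) confirms the API is reachable. A minimal sketch using httpx, though any HTTP client works:

# Smoke test: the service should answer on port 8000 once docker-compose is up.
import httpx

resp = httpx.get("http://localhost:8000/health", timeout=5.0)
resp.raise_for_status()  # non-2xx means the stack is not healthy yet
print(resp.json())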

Features

  • Async problem generation via Kafka workers with status tracking
  • Multi-provider LLM support for OpenAI and Google Gemini
  • 107 French verbs with complete conjugation tables across major tenses
  • Compositional prompt system generating targeted grammatical errors (see the illustrative sketch after this list)
  • REST API with OpenAPI documentation and API key authentication
  • Full observability with OpenTelemetry, Prometheus, and Grafana
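
As a purely illustrative sketch of the idea behind compositional prompts (the function and fragment names below are invented; the real components live in src/prompts/):

# Hypothetical illustration: each fragment addresses one concern, and the
# final prompt is their composition. Not the repo's actual prompt code.
def build_prompt(focus: str, tense: str) -> str:
    persona = "You are a French grammar teacher writing quiz problems."
    task = (
        f"Write a sentence testing {focus} in the {tense} tense, plus "
        "three distractor options, each containing one targeted error."
    )
    output_format = "Return JSON with fields: statement, options, answer."
    return "\n".join([persona, task, output_format])

print(build_prompt("conjugation", "futur_simple"))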

Documentation

Document              Description
Architecture          System design, data flow, and key decisions
Development Guide     Development workflows and CLI reference
Operations Playbook   Common operations using the lqs CLI

API Endpoints

Endpoint                            Method  Description
/health                             GET     Health check
/api/v1/problems/grammar/random     GET     Get a random grammar problem from the pool (LRU) with optional filters
/api/v1/problems/{id}               GET     Get a specific problem by ID
/api/v1/problems/generate           POST    Trigger async problem generation
/api/v1/generation-requests/{id}    GET     Check generation request status
/api/v1/verbs/{infinitive}          GET     Get verb details by infinitive
/api/v1/cache/stats                 GET     View cache statistics

Full API documentation available at /docs when running the service, or view the hosted API reference.
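
Problem generation is asynchronous: POST /api/v1/problems/generate returns a generation request that you then poll via /api/v1/generation-requests/{id}. Below is a minimal sketch of that flow with httpx; the API key header name, request body, response fields, and query parameter names are assumptions for illustration, not the documented contract (check /docs for the real schema):

# Sketch of the async flow: trigger generation, poll status, fetch a problem.
# ASSUMPTIONS: the X-API-Key header, the {"count": 5} body, the "id"/"status"
# fields, and the focus/tenses query params are illustrative guesses.
import time
import httpx

with httpx.Client(
    base_url="http://localhost:8000",
    headers={"X-API-Key": "your_api_key"},  # auth scheme assumed
    timeout=10.0,
) as client:
    # Trigger async generation of 5 problems (mirrors `lqs problem generate -c 5`).
    resp = client.post("/api/v1/problems/generate", json={"count": 5})
    resp.raise_for_status()
    request_id = resp.json()["id"]  # field name assumed

    # Poll until the request leaves its pending state (status values assumed).
    while True:
        status = client.get(f"/api/v1/generation-requests/{request_id}").json()
        if status.get("status") not in ("pending", "in_progress"):
            break
        time.sleep(1.0)

    # Fetch a random grammar problem; params mirror the CLI's --focus/--tenses.
    problem = client.get(
        "/api/v1/problems/grammar/random",
        params={"focus": "conjugation", "tenses": "futur_simple"},
    )
    print(problem.json())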

CLI Quick Reference

The lqs CLI provides direct access to core functionality:

# Initialize the database with verbs
lqs database init

# Generate problems asynchronously
lqs problem generate -c 5

# Check generation status
lqs generation status <request-id>

# Get a random grammar problem
lqs problem random grammar

# Get a random grammar problem with filters
lqs problem random grammar --focus conjugation --tenses futur_simple

# View problem with LLM reasoning trace
lqs problem get <uuid> --llm-trace

See the Operations Playbook for comprehensive CLI usage.

Project Structure

src/
├── api/           # REST API endpoints
├── cli/           # Command-line interface (lqs)
├── clients/       # LLM clients (OpenAI, Gemini)
├── prompts/       # Compositional prompt system
├── services/      # Business logic
├── worker/        # Kafka consumers
└── main.py        # Application entry

License

MIT License - see LICENSE
