FastAPI-based backend service for the Titanic Survivor Prediction Application.
```bash
# From the project root directory
docker compose -f 'compose/compose.dev.yaml' up -d --build

# Access Swagger UI
open http://localhost:8000/docs
```

No setup needed! The service starts with all dependencies configured.
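If you prefer a scripted check that the stack came up, you can poll the health endpoint (documented below); a minimal sketch:

```python
# Minimal smoke test: poll the health endpoint until the stack responds.
import time

import requests

for _ in range(10):
    try:
        r = requests.get("http://localhost:8000/health", timeout=2)
        print(r.status_code, r.json())
        break
    except requests.ConnectionError:
        time.sleep(1)  # containers may still be starting
```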
Features:

- RESTful API with automatic OpenAPI/Swagger documentation
- JWT Authentication with role-based access control (see the sketch after this list)
- Async PostgreSQL integration with SQLAlchemy
- Structured logging and error handling
- Health checks and monitoring endpoints
- Auto-reload in development mode
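Role-based access control in FastAPI is typically enforced through dependencies. The sketch below is illustrative only, not the project's actual code (see routers/auth.py); it assumes HS256-signed tokens carrying a `role` claim and the `JWT_SECRET_KEY` variable described under Configuration:

```python
# Illustrative sketch only -- the real logic lives in routers/auth.py.
# Assumes tokens carry a "role" claim and HS256 signing with JWT_SECRET_KEY.
import os

import jwt  # PyJWT
from fastapi import Depends, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

bearer = HTTPBearer()

def require_role(role: str):
    """Return a dependency that rejects tokens lacking the given role."""
    def checker(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
        try:
            payload = jwt.decode(
                creds.credentials, os.environ["JWT_SECRET_KEY"], algorithms=["HS256"]
            )
        except jwt.PyJWTError:
            raise HTTPException(status_code=401, detail="Invalid token")
        if payload.get("role") != role:
            raise HTTPException(status_code=403, detail="Insufficient permissions")
        return payload
    return checker

# Usage: @router.delete("/models/{id}", dependencies=[Depends(require_role("admin"))])
```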
Access the Swagger UI at: http://localhost:8000/docs
- `GET /health` - Service health check
- `POST /auth/signup` - User registration
- `POST /auth/login` - User authentication (returns JWT token)
- `POST /predict` - Get survival prediction
- `GET /models` - List available ML models
- `GET /predict/history` - View last 10 predictions
- `POST /models/train` - Train new model
- `DELETE /models/{id}` - Delete specific model
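For a scripted end-to-end pass over these endpoints, the sketch below signs up, logs in, and requests a prediction with `requests`. The passenger fields and the `access_token` key in the login response are assumptions; check the request/response schemas at /docs:

```python
# Illustrative client flow; the passenger fields and the token field name
# ("access_token") are assumptions -- check http://localhost:8000/docs.
import requests

BASE = "http://localhost:8000"
creds = {"email": "test@example.com", "password": "password123"}

requests.post(f"{BASE}/auth/signup", json=creds)
token = requests.post(f"{BASE}/auth/login", json=creds).json()["access_token"]

headers = {"Authorization": f"Bearer {token}"}
passenger = {"age": 29, "sex": "female", "pclass": 1}  # hypothetical fields
print(requests.post(f"{BASE}/predict", json=passenger, headers=headers).json())
```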
All error responses follow a consistent JSON structure and include a correlation ID for traceability.
Example error response:
```json
{
  "detail": "Validation Error: age: Input should be greater than 0",
  "code": "ERR_422",
  "timestamp": "2025-06-19T13:33:50.088798+00:00",
  "correlation_id": "edc76065-6828-41be-80e0-2cf49587061e"
}
```

- `detail`: Human-readable error message.
- `code`: Machine-readable error code (e.g., `ERR_400`, `ERR_404`, `ERR_422`, `ERR_500`).
- `timestamp`: ISO 8601 UTC timestamp of the error.
- `correlation_id`: Unique ID for tracing the request across logs and responses.
All responses (including errors) include an `X-Correlation-ID` header.
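Clients can surface that ID when a call fails, so the failure can be matched against the backend logs; a minimal sketch:

```python
# Sketch: when a request fails, capture the correlation ID so the failure
# can be traced across the backend logs.
import requests

resp = requests.post("http://localhost:8000/predict", json={"age": -1})
if resp.status_code >= 400:
    body = resp.json()
    cid = resp.headers.get("X-Correlation-ID", body.get("correlation_id"))
    print(f"[{body.get('code')}] {body.get('detail')} (correlation_id={cid})")
```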
To run the tests and linters locally:

```bash
cd app/backend

# Install dependencies (if not already done)
uv sync --no-cache --extra dev

# Run tests
uv run pytest

# Run tests with coverage
uv run pytest --cov=. --cov-report=html

# Linting and formatting check
uv run ruff check
uv run ruff format --check

# Auto-fix formatting
uv run ruff format
```
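Individual tests use FastAPI's `TestClient`. A minimal sketch, assuming the application object is exposed as `app` in `main.py` (the real suite lives in `tests/`):

```python
# Sketch of a health-check test; assumes `app` is importable from main.py.
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

def test_health():
    response = client.get("/health")
    assert response.status_code == 200
```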
Project structure:

```
backend/
├── main.py                  # Application entry point
├── routers/                 # API route handlers
│   ├── auth.py              # Authentication endpoints
│   ├── models.py            # Model management
│   └── prediction.py        # Prediction endpoints
├── services/                # Business logic
│   ├── prediction_service.py
│   ├── model_service.py
│   └── user_service.py
├── db/                      # Database layer
│   ├── schemas.py           # SQLAlchemy models
│   └── helpers.py           # DB utilities
├── models/                  # Pydantic schemas
└── tests/                   # Test suite
```
The service is pre-configured in Docker Compose. For local development:
| Variable | Description | Docker Default |
|---|---|---|
| `DB_USER` | Database user | `backend` |
| `DB_DATABASE` | Database name | `backend` |
| `DB_ADDRESS` | Database host | `postgres` |
| `DB_PASSWORD` | Database password | Set in compose |
| `JWT_SECRET_KEY` | JWT signing key | Set in compose |
| `MODEL_SERVICE_URL` | Model service URL | `http://model:8000` |
| `ALLOWED_ORIGINS` | CORS origins | `*` |
Use pgAdmin for database exploration:
- URL: http://localhost:5050
- Email: `team@random.iceberg`
- Password: `Cheezus123`
Key tables:

- `users` - User accounts and authentication
- `prediction` - Prediction history
- `model` - Trained model metadata
- `feature` - Available ML features
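Outside pgAdmin, the same data can be inspected from Python; a sketch with SQLAlchemy, assuming the Postgres port is published to the host and that the column names used below exist (see db/schemas.py for the real models):

```python
# Ad-hoc inspection sketch; connection details and column names are
# assumptions -- see db/schemas.py and the compose file for the real values.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://backend:<password>@localhost:5432/backend")
with engine.connect() as conn:
    rows = conn.execute(text("SELECT * FROM prediction ORDER BY id DESC LIMIT 10"))
    for row in rows:
        print(row)
```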
To exercise the API from Swagger UI:

- Go to http://localhost:8000/docs
- Try the `/health` endpoint first
- Use `/auth/signup` to create a user
- Use `/auth/login` to get a JWT token
- Click "Authorize" and paste the token
- Test protected endpoints
The same flow with `curl`:

```bash
# Health check
curl http://localhost:8000/health

# Sign up
curl -X POST http://localhost:8000/auth/signup \
  -H "Content-Type: application/json" \
  -d '{"email": "test@example.com", "password": "password123"}'

# Login
curl -X POST http://localhost:8000/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "test@example.com", "password": "password123"}'
```

The GitLab CI pipeline automatically:
- Runs tests on every commit
- Checks code formatting with ruff
- Builds Docker image
- Pushes to registry (for main/dev branches)