FastAPI Backend for Code Analysis Agent

This Python FastAPI service provides REST endpoints for:

  • Analyzing uploaded Python code
  • Generating narration using Ollama models

Setup

  1. Install Python 3.12+

  2. Quick Start: Run startup.sh from the project root; it will:

    • Automatically create .env file from sample.env.txt (if it doesn't exist)
    • Set up virtual environment and install dependencies
    • Start the backend service
  3. Manual Setup (if not using startup.sh):

    • Create a virtual environment and install dependencies:
    uv venv --python python3.12
    source .venv/bin/activate
    uv pip install -r requirements.txt
    # requirements.txt includes:
    # fastapi, uvicorn, requests, python-multipart, gtts, python-dotenv
    • Copy sample.env.txt to .env and customize if needed:
    cp sample.env.txt .env
  4. Install Ollama and start the Ollama server (see main README)

  5. Environment Variables (configured in .env file):

    OLLAMA_API_URL=http://localhost:11434  # Ollama API endpoint (default: http://localhost:11434)
    FRONTEND_URL=http://localhost:5173     # Frontend URL for CORS (default: http://localhost:5173)
    • The application uses python-dotenv to load variables automatically from the .env file
    • Environment variables are validated at startup (see the configuration sketch after this list)
    • Logging is configured to output INFO-level messages with timestamps
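
A minimal sketch of how this startup configuration could look, assuming the variable names shown above; the validate_url helper, log format, and exact error handling are illustrative, not necessarily what main.py does:

  import logging
  import os
  from urllib.parse import urlparse

  from dotenv import load_dotenv

  # Load variables from the .env file into the process environment
  load_dotenv()

  OLLAMA_API_URL = os.getenv("OLLAMA_API_URL", "http://localhost:11434")
  FRONTEND_URL = os.getenv("FRONTEND_URL", "http://localhost:5173")

  def validate_url(name: str, value: str) -> None:
      # Hypothetical validation helper: reject values that are not http(s) URLs
      parsed = urlparse(value)
      if parsed.scheme not in ("http", "https") or not parsed.netloc:
          raise ValueError(f"{name} is not a valid URL: {value!r}")

  validate_url("OLLAMA_API_URL", OLLAMA_API_URL)
  validate_url("FRONTEND_URL", FRONTEND_URL)

  # Structured logging: INFO level with timestamps
  logging.basicConfig(
      level=logging.INFO,
      format="%(asctime)s %(levelname)s %(name)s: %(message)s",
  )
  logging.getLogger(__name__).info("Ollama endpoint: %s", OLLAMA_API_URL)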

Usage

  • Start the API server:
    uvicorn main:app --reload
  • The server runs on http://localhost:8000 by default
  • CORS middleware allows requests only from FRONTEND_URL (set in the .env file); see the sketch after this list
  • Logging output shows INFO-level messages, including API calls and errors
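
A minimal sketch of the CORS and app setup described above, assuming FRONTEND_URL has already been read from the environment; the middleware options shown may differ from those in main.py:

  import os

  from fastapi import FastAPI
  from fastapi.middleware.cors import CORSMiddleware

  FRONTEND_URL = os.getenv("FRONTEND_URL", "http://localhost:5173")

  app = FastAPI(title="Code Analysis Agent Backend")

  # Restrict cross-origin requests to the configured frontend instead of "*"
  app.add_middleware(
      CORSMiddleware,
      allow_origins=[FRONTEND_URL],
      allow_credentials=True,
      allow_methods=["*"],
      allow_headers=["*"],
  )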

API Endpoints

  • POST /analyze: Analyze code and return a summary (example client calls follow this list)
    • Form data: code (string)
    • Returns: {"summary": "..."}
  • POST /narrate: Generate narration using Ollama
    • Form data: code (string), model (string, optional, defaults to "llama3.2:3b")
    • Returns: {"narration": "..."}
    • Also generates an audio file using gTTS and saves it to narration/narration.mp3
  • GET /narration-audio: Retrieve the generated narration audio file
    • Returns: MP3 audio file (or 404 if not found)
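
Example client calls using the requests library, assuming the server is running on the default port; the sample code string and output file name are illustrative:

  import requests

  BASE_URL = "http://localhost:8000"
  code = "def add(a, b):\n    return a + b\n"

  # POST /analyze with form data -> {"summary": "..."}
  resp = requests.post(f"{BASE_URL}/analyze", data={"code": code})
  print(resp.json()["summary"])

  # POST /narrate with an optional model override -> {"narration": "..."}
  resp = requests.post(
      f"{BASE_URL}/narrate",
      data={"code": code, "model": "llama3.2:3b"},
  )
  print(resp.json()["narration"])

  # GET /narration-audio -> MP3 bytes (404 until /narrate has been called)
  audio = requests.get(f"{BASE_URL}/narration-audio")
  if audio.status_code == 200:
      with open("narration.mp3", "wb") as f:
          f.write(audio.content)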

Implementation Details

  • Narration audio files are stored in the narration/ directory
  • Audio is generated using Google Text-to-Speech (gTTS)
  • The service connects to the Ollama API at http://localhost:11434 (configurable via the OLLAMA_API_URL environment variable); a sketch of the narration flow follows this list
  • Environment variables are loaded from the .env file using python-dotenv
  • Environment variables are validated at startup (URL format validation)
  • CORS is configured to allow requests only from FRONTEND_URL (more secure than a wildcard origin)
  • Structured logging is configured with timestamps and log levels (INFO, WARNING, ERROR)
  • All API endpoints have comprehensive error handling with proper HTTP status codes
  • All API endpoints log their operations for debugging and monitoring
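
A minimal sketch of the narration flow described above, assuming the standard Ollama /api/generate endpoint and the gTTS save API; the narrate function name, prompt wording, and error handling are illustrative, not the exact code in main.py:

  import os
  from pathlib import Path

  import requests
  from gtts import gTTS

  OLLAMA_API_URL = os.getenv("OLLAMA_API_URL", "http://localhost:11434")
  NARRATION_DIR = Path("narration")

  def narrate(code: str, model: str = "llama3.2:3b") -> str:
      # Ask Ollama for a spoken-style explanation of the code (non-streaming)
      response = requests.post(
          f"{OLLAMA_API_URL}/api/generate",
          json={
              "model": model,
              "prompt": f"Explain this Python code as a short narration:\n\n{code}",
              "stream": False,
          },
          timeout=120,
      )
      response.raise_for_status()
      narration = response.json()["response"]

      # Convert the narration text to speech and save it to narration/narration.mp3
      NARRATION_DIR.mkdir(exist_ok=True)
      gTTS(narration).save(str(NARRATION_DIR / "narration.mp3"))
      return narration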

See main.py for implementation details.