A full-stack application for managing and chatting with multiple LLM providers (Ollama, Claude, OpenAI, OpenRouter, Gemini). Users can store API keys, chat with different models, and export their conversations.
- User Authentication: Register and log in with secure JWT authentication
- API Key Management: Store and manage API keys for multiple LLM providers
- Multi-Provider Chat: Chat with Ollama, Claude, OpenAI, OpenRouter, and Gemini
- Chat History: All conversations are stored per user
- CSV Export: Export your chat history as CSV
- Modern UI: Black-and-white theme with the Geist Mono font
- One-Click Deployment: Easy setup for any platform
- FastAPI - Modern Python web framework
- SQLite - Lightweight database
- SQLAlchemy - ORM for database operations
- JWT - Authentication tokens
- UV - Fast Python package manager
- React 19 - Latest React version
- TypeScript - Type-safe development
- Vite - Fast build tool
- Bun - Fast JavaScript runtime and package manager
- Axios - HTTP client
```
llm-chat-app/
├── backend/
│   ├── app/
│   │   ├── core/          # Configuration and database
│   │   ├── models/        # SQLAlchemy models
│   │   ├── schemas/       # Pydantic schemas
│   │   ├── routers/       # API routes
│   │   ├── services/      # Business logic
│   │   └── main.py        # FastAPI app
│   ├── database/
│   │   └── schema.sql     # SQL schema
│   └── pyproject.toml     # Python dependencies
├── frontend/
│   ├── src/
│   │   ├── components/    # React components
│   │   ├── contexts/      # React contexts
│   │   ├── pages/         # Page components
│   │   ├── services/      # API services
│   │   └── types.ts       # TypeScript types
│   └── package.json       # Node dependencies
├── deploy.sh              # Linux/Mac deployment script
├── deploy.bat             # Windows deployment script
└── README.md
```
- Python 3.10+
- Node.js 18+ (or Bun)
- UV (will be installed automatically if missing)
- Bun (will be installed automatically if missing)
Run `./deploy.sh` (Linux/Mac) or `deploy.bat` (Windows). The script will:
- Install UV and Bun if needed
- Set up the backend with all dependencies
- Initialize the SQLite database
- Start the backend server (port 8000)
- Set up the frontend
- Start the frontend server (port 5173)
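The steps above can be mirrored in a short Python sketch. The command strings and the script-selection logic here are assumptions for illustration; the actual `deploy.sh`/`deploy.bat` may differ in detail:

```python
import platform

# Backend steps from the list above (illustrative; see deploy.sh for the real commands).
BACKEND_STEPS = [
    "uv pip install -e .",
    'python -c "from app.core.database import init_db; init_db()"',
    "uvicorn app.main:app --port 8000",
]

# Frontend steps from the list above.
FRONTEND_STEPS = [
    "bun install",
    "bun run dev",
]

def deploy_script_name() -> str:
    """Pick the platform-appropriate deployment script from the repo root."""
    return "deploy.bat" if platform.system() == "Windows" else "./deploy.sh"
```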
```bash
cd backend

# Install UV if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv pip install -e .

# Initialize the database
python -c "from app.core.database import init_db; init_db()"

# Run the server
uvicorn app.main:app --reload
```

```bash
cd frontend

# Install Bun if not already installed
curl -fsSL https://bun.sh/install | bash

# Install dependencies
bun install

# Run the dev server
bun run dev
```

- Register/Login: Create an account or log in
- Add API Keys: Go to the API Keys tab and add your API keys for the providers you want to use
- Start Chatting: Select a provider and model, then start asking questions
- Export Chats: Click "Export CSV" to download your chat history
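As a rough illustration of the export step, a backend can flatten chat rows into CSV with the standard library. The column names here are assumptions for the sketch, not the app's actual schema:

```python
import csv
import io

# Hypothetical column set; the real app's chat fields may differ.
FIELDS = ["id", "provider", "model", "message", "response", "created_at"]

def chats_to_csv(chats: list[dict]) -> str:
    """Serialize a list of chat rows into a CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for chat in chats:
        # Missing fields become empty cells rather than raising.
        writer.writerow({k: chat.get(k, "") for k in FIELDS})
    return buf.getvalue()

example = chats_to_csv([
    {"id": 1, "provider": "ollama", "model": "llama3",
     "message": "hi", "response": "hello", "created_at": "2025-01-01"},
])
```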
- `POST /api/auth/register` - Register a new user
- `POST /api/auth/login` - Log in and get a token
- `GET /api/auth/me` - Get current user info
- `GET /api/api-keys` - Get all API keys
- `POST /api/api-keys` - Create a new API key
- `DELETE /api/api-keys/{id}` - Delete an API key
- `GET /api/chats` - Get the user's chats
- `POST /api/chats` - Create a new chat
- `GET /api/chats/{id}` - Get a specific chat
- `DELETE /api/chats/{id}` - Delete a chat
- `GET /api/export/chats/csv` - Export chats as CSV
The SQL schema is defined in `backend/database/schema.sql`:
- `users`: User accounts
- `api_keys`: Shared API keys (all users can use)
- `chats`: User-specific chat history
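The authoritative DDL lives in `backend/database/schema.sql`. Purely as an illustration of the three tables, here is a hedged `sqlite3` sketch; the column names are assumed, not copied from the real schema:

```python
import sqlite3

# Illustrative DDL only; see backend/database/schema.sql for the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    username TEXT UNIQUE NOT NULL,
    hashed_password TEXT NOT NULL
);

CREATE TABLE api_keys (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    provider TEXT NOT NULL,
    key TEXT NOT NULL
);

CREATE TABLE chats (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER NOT NULL REFERENCES users(id),
    provider TEXT NOT NULL,
    model TEXT NOT NULL,
    message TEXT NOT NULL,
    response TEXT NOT NULL
);
""")

tables = sorted(
    row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'"
    )
)
```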
Create a `.env` file in the backend directory (optional):

```env
DATABASE_URL=sqlite:///./llm_chat.db
SECRET_KEY=your-secret-key-change-in-production
ACCESS_TOKEN_EXPIRE_MINUTES=30
CORS_ORIGINS=["http://localhost:5173"]
```

- Ollama: Local models (default: llama3)
- OpenAI: GPT models (default: gpt-4)
- Claude: Anthropic models (default: claude-3-5-sonnet-20241022)
- OpenRouter: Multiple models (default: openai/gpt-4)
- Gemini: Google models (default: gemini-pro)
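The defaults above can be captured in a small lookup table, sketched below; the app's actual configuration may name these differently:

```python
# Default model per provider, taken from the list above.
DEFAULT_MODELS = {
    "ollama": "llama3",
    "openai": "gpt-4",
    "claude": "claude-3-5-sonnet-20241022",
    "openrouter": "openai/gpt-4",
    "gemini": "gemini-pro",
}

def default_model(provider: str) -> str:
    """Look up a provider's default model, case-insensitively."""
    return DEFAULT_MODELS[provider.lower()]
```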
```bash
cd backend
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

API documentation is available at http://localhost:8000/docs.
```bash
cd frontend
bun run dev
```

The frontend is available at http://localhost:5173.
MIT