AI-Docs is an advanced AI-powered chatbot that uses Retrieval-Augmented Generation (RAG) to answer user queries based on stored PDF documents. The chatbot integrates FastAPI for the backend, React (Vite) for the frontend, and is powered by Anthropic Claude.
- ChatGPT-style conversation with past chat recall.
- PDF Processing: Extracts and indexes data from uploaded PDF files.
- Real-time message storage: Saves chat history using SQLite.
- RAG-based answering: Uses ChromaDB and embedding models for accurate, document-grounded responses.
- Multi-document retrieval: Queries across multiple PDFs simultaneously using MMR search.
- Dynamic chat list: Automatically updates the sidebar with active chats.
- Chat title editing: Rename conversations directly from the sidebar.
- Delete chat support: Remove conversations directly from the sidebar.
- Configurable settings: Switch between Claude Haiku, Sonnet, and Opus models at runtime.
- Dark UI theme for a better user experience.
- Responsive Web Design: Mobile-friendly and accessible from all screen sizes.
- Dockerized Deployment: Easily run with Docker or Docker Compose.
- Frontend: React 19 (Vite), TypeScript, TailwindCSS
- Backend: FastAPI, SQLite, ChromaDB, LangChain
- AI Model: Anthropic Claude (Haiku / Sonnet / Opus, configurable via settings)
- Embeddings: ChromaDB `DefaultEmbeddingFunction` (local, sentence-transformers; no API key required)
```
User Question
     │
     ▼
LangChain RAG Chain
     │
     ├─► ChromaDB (MMR Retrieval)
     │     └─ Searches across all indexed PDF chunks
     │
     ├─► Retrieved Context (top-k relevant chunks)
     │
     └─► Anthropic Claude (claude-opus-4-6 / sonnet / haiku)
           └─ Generates grounded answer from context
     │
     ▼
Answer + Chat History (SQLite)
```
- PDFs are uploaded and chunked via `pdfplumber` + `RecursiveCharacterTextSplitter`
- Chunks are embedded locally and stored in ChromaDB (a persistent vector database)
- On each query, MMR (Maximal Marginal Relevance) retrieval fetches the most relevant and diverse chunks
- LangChain passes the retrieved context and conversation history to Claude
- Claude generates an answer grounded in the provided PDFs, rather than outside knowledge
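The chunk-then-retrieve flow above can be sketched in plain Python. This is a toy illustration, not the project's actual code: the sliding-window chunker stands in for `RecursiveCharacterTextSplitter`, the bag-of-words vectors stand in for the local sentence-transformers embeddings, and `lam` plays the same relevance/diversity trade-off role as the `lambda_mult` knob in LangChain's MMR search:

```python
import math
from collections import Counter


def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Sliding-window stand-in for RecursiveCharacterTextSplitter:
    fixed-size chunks that overlap so sentences aren't cut off at edges."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for the real local model)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def mmr(query: str, chunks: list[str], k: int = 2, lam: float = 0.7) -> list[str]:
    """Maximal Marginal Relevance: greedily pick chunks that are relevant
    to the query but dissimilar to the chunks already selected."""
    q = embed(query)
    vecs = [embed(c) for c in chunks]
    selected: list[int] = []
    candidates = list(range(len(chunks)))
    while candidates and len(selected) < k:
        best = max(
            candidates,
            key=lambda i: lam * cosine(q, vecs[i])
            - (1 - lam) * max((cosine(vecs[i], vecs[j]) for j in selected), default=0.0),
        )
        selected.append(best)
        candidates.remove(best)
    return [chunks[i] for i in selected]
```

The penalty term is what makes MMR return *diverse* context: a chunk that merely repeats an already-selected chunk scores lower than a less similar but still relevant one.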
```
ai-docs/
├── backend/
│   ├── src/
│   │   ├── api.py              # FastAPI routes
│   │   ├── chat_manager.py     # Chat history management
│   │   ├── embedding.py        # ChromaDB embedding storage
│   │   ├── file_manager.py     # PDF upload/delete operations
│   │   ├── preprocessing.py    # PDF text extraction & chunking
│   │   ├── retrieval.py        # LangChain RAG chain + Claude integration
│   │   └── settings.py         # Runtime settings management
│   ├── Dockerfile
│   └── requirements.txt
├── frontend/
│   ├── src/
│   │   ├── components/         # React UI components
│   │   ├── hooks/              # Custom React hooks
│   │   ├── styles/             # Global styles
│   │   └── App.tsx
│   ├── Dockerfile
│   └── package.json
├── docker-compose.build.yml
├── docker-compose.image.yml
└── deploy.sh
```
```bash
git clone https://github.com/erenisci/ai-docs
cd ai-docs
```

Navigate to the backend directory and install dependencies:
```bash
cd backend
python -m venv venv
source venv/bin/activate   # macOS/Linux
venv\Scripts\activate      # Windows
pip install -r requirements.txt
```

Create a `.env` file in the backend directory:

```
ANTHROPIC_API_KEY=your_anthropic_api_key_here
```

Navigate to the frontend directory and install dependencies:
```bash
cd frontend
npm install
```

Run the backend:

```bash
cd backend
uvicorn src.api:app --host 127.0.0.1 --port 8000 --reload
```

Run the frontend:

```bash
cd frontend
npm run dev
```

The frontend will be available at http://localhost:5173
```bash
docker-compose -f docker-compose.build.yml up --build -d
```

- Frontend: http://localhost:5173
- Backend: http://localhost:8000
Alternatively, pull the prebuilt images:

```bash
docker pull erenisci/ai-docs:backend
docker pull erenisci/ai-docs:frontend
```

Start them with Docker Compose:

```bash
docker-compose -f docker-compose.image.yml up -d
```

Or run the containers directly:

```bash
docker run -d -p 8000:8000 erenisci/ai-docs:backend
docker run -d -p 5173:3000 erenisci/ai-docs:frontend
```

| Method | Endpoint | Description |
|---|---|---|
| POST | `/ask/` | Sends a query to the chatbot. |
| GET | `/get-chats/` | Retrieves all stored chat sessions. |
| GET | `/get-chat-history/{chat_id}` | Fetches messages from a specific chat. |
| POST | `/update-chat-title/{chat_id}/{title}` | Updates the title of a specific chat. |
| DELETE | `/delete-chat/{chat_id}` | Deletes a specific chat. |
| GET | `/list-pdfs/` | Lists all stored PDFs. |
| POST | `/upload-pdf/` | Uploads a PDF file for processing. |
| POST | `/process-pdfs/` | Processes all uploaded PDFs. |
| DELETE | `/delete-pdf/` | Deletes a specific PDF file. |
| GET | `/get-settings/` | Returns the current AI settings. |
| POST | `/update-settings/` | Updates AI model and runtime settings. |
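These endpoints can be exercised from any HTTP client. Below is a minimal sketch using only Python's standard library, assuming the backend runs on `localhost:8000`; the JSON field names for `/ask/` (`chat_id`, `question`) are illustrative assumptions, not taken from the actual schema in `src/api.py`:

```python
import json
import urllib.request

BASE = "http://localhost:8000"


def build_ask_request(chat_id: str, question: str) -> urllib.request.Request:
    """Builds a POST to /ask/. The JSON field names used here are
    guesses for illustration; check src/api.py for the real schema."""
    body = json.dumps({"chat_id": chat_id, "question": question}).encode()
    return urllib.request.Request(
        f"{BASE}/ask/",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def ask(chat_id: str, question: str) -> dict:
    """Sends the question to the running backend and decodes the JSON reply."""
    with urllib.request.urlopen(build_ask_request(chat_id, question)) as resp:
        return json.load(resp)
```

Swapping `urllib` for `requests` or a frontend `fetch` call is straightforward; the point is only that every route above is plain JSON over HTTP.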
- Start a new chat by clicking "New Chat".
- Upload PDFs via the PDF manager to provide document-based answers.
- Process PDFs to chunk and index them into ChromaDB.
- Ask questions in the message box; Claude will answer using only the document content.
- Switch models from the settings panel (Haiku for speed, Opus for depth).
- Review past conversations in the sidebar.
- Rename or delete chats as needed.
Contributions are welcome! If you'd like to improve the project, feel free to open an issue or submit a pull request.
This project is licensed under the MIT License.