A full-stack Retrieval-Augmented Generation (RAG) study assistant that answers questions only from your uploaded content — no hallucinations, just grounded facts.
Built with:
- ⚛️ React (TypeScript)
- 🐹 Go (backend API proxy)
- 🐍 Python + FastAPI (RAG microservice)
- 🧠 LangChain + OpenAI
- 📦 Qdrant (vector DB for semantic search)
Demo coming soon (GIF or screenshot of the app in action).
- Semantic document ingestion with LangChain
- Context-aware question answering using OpenAI
- Modular backend: Go → FastAPI HTTP bridge
- Answers grounded in your own files (notes, PDFs, etc.)
- Returns “I don’t know” when no relevant context is found to avoid hallucination
- Fast and lightweight — runs locally with Docker + Qdrant
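The "I don't know" behavior above can be sketched as a simple relevance gate: if no retrieved chunk scores above a similarity threshold, refuse to answer instead of calling the LLM. This is a hypothetical illustration; the function names and the threshold value are assumptions, not the project's actual code.

```python
from typing import Callable, List, Optional, Tuple

FALLBACK = "I don't know"
MIN_SCORE = 0.75  # assumed similarity cutoff; tune for your embeddings


def build_context(hits: List[Tuple[str, float]],
                  min_score: float = MIN_SCORE) -> Optional[str]:
    """Join the text of hits that clear the threshold; None if none do."""
    relevant = [text for text, score in hits if score >= min_score]
    return "\n\n".join(relevant) if relevant else None


def answer(hits: List[Tuple[str, float]],
           llm_call: Callable[[str], str]) -> str:
    """Answer only from retrieved context; fall back rather than hallucinate."""
    context = build_context(hits)
    if context is None:
        return FALLBACK
    return llm_call(context)
```

In the real pipeline, `hits` would come from a Qdrant similarity search and `llm_call` would wrap an OpenAI chat completion with the context injected into the prompt.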
| Layer | Technology |
|---|---|
| Frontend | React (TypeScript, Next.js) |
| Backend | Go (API proxy → Python microservice) |
| Microservice | FastAPI, LangChain, OpenAI |
| Vector Store | Qdrant (Dockerized) |
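As a hypothetical illustration of the proxy flow, the Go backend forwards each question to the Python microservice as a JSON POST. The `/ask` path, port, and payload shape below are assumptions for the sketch, not the project's confirmed API.

```python
import json
import urllib.request

MICRO_URL = "http://localhost:8000/ask"  # assumed endpoint of the FastAPI service


def build_request(query: str, url: str = MICRO_URL) -> urllib.request.Request:
    """Build the JSON POST a proxy would forward to the RAG microservice."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )


def ask(query: str) -> str:
    """Send the question and return the microservice's answer field."""
    with urllib.request.urlopen(build_request(query)) as resp:
        return json.loads(resp.read())["answer"]
```

The Go layer would do the equivalent with `net/http`, which keeps API keys and the Python service off the public network edge.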
Prerequisites:
- Docker + Docker Compose
- Node.js + npm
- Python 3.10+
- Go 1.21+
Open a separate terminal window for each of the following commands (on Windows, Command Prompt is recommended):
```shell
# Terminal 1: start Qdrant (vector store)
cd qdrant
docker-compose up -d
```

```shell
# Terminal 2: start the RAG microservice
cd ..
uvicorn rag_pipeline.api:app --reload --port 8000
```

```shell
# Terminal 3: start the Go backend
cd backend
go run main.go
```

```shell
# Terminal 4: start the frontend
cd ../frontend
npm install   # only needed once
npm run dev
```

```shell
# Ingest a document into the vector store
cd ../rag_pipeline
python ingest.py path/to/your/file.txt
```
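Conceptually, ingestion splits a document into overlapping chunks before embedding each chunk and upserting it into Qdrant. The real `ingest.py` presumably uses LangChain's text splitters and OpenAI embeddings; the sketch below shows only the chunking step, with assumed sizes and names.

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap so that context
    spanning a chunk boundary is not lost at retrieval time."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars shared
    return chunks
```

Each resulting chunk would then be embedded and stored with its source metadata, so answers can cite which file (and roughly where) the context came from.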