Barefoot0/RAG_Study_Assistant
🧠 RAG Study Assistant

A full-stack Retrieval-Augmented Generation (RAG) study assistant that answers questions only from your uploaded content — no hallucinations, just grounded facts.

Built with:

  • ⚛️ React (TypeScript)
  • 🐹 Go (backend API proxy)
  • 🐍 Python + FastAPI (RAG microservice)
  • 🧠 LangChain + OpenAI
  • 📦 Qdrant (vector DB for semantic search)

📸 Demo

Coming soon!


⚙️ Features

  • Semantic document ingestion with LangChain
  • Context-aware question answering using OpenAI
  • Modular backend with a Go → FastAPI HTTP bridge
  • Answers grounded in your own files (notes, PDFs, etc.)
  • Returns “I don’t know” when no relevant context is found to avoid hallucination
  • Fast and lightweight — runs locally with Docker and a local vector store
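The "I don't know" behavior above amounts to a relevance gate: if no retrieved chunk scores above a similarity threshold, the assistant declines to answer instead of guessing. A minimal sketch in plain Python — the threshold value and all names here are illustrative assumptions, not this repository's actual code:

```python
# Hypothetical grounding gate: threshold and names are illustrative,
# not taken from this repository.
SIMILARITY_THRESHOLD = 0.75  # assumed cutoff for "relevant enough"

def grounded_answer(question, retrieved):
    """retrieved: list of (chunk_text, similarity_score) pairs
    returned by the vector store, highest score first."""
    relevant = [(text, score) for text, score in retrieved
                if score >= SIMILARITY_THRESHOLD]
    if not relevant:
        # No sufficiently similar context: refuse rather than hallucinate.
        return "I don't know"
    # Otherwise the relevant chunks are joined and passed to the LLM
    # as grounding context for the actual answer.
    context = "\n\n".join(text for text, _ in relevant)
    return f"[answer generated from context]\n{context}"
```

The key design point is that the gate runs before any LLM call, so an off-topic question never reaches the model with empty context.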

🧰 Tech Stack

| Layer        | Technology                          |
|--------------|-------------------------------------|
| Frontend     | React (TypeScript, Next.js)         |
| Backend      | Go (API proxy → Python microservice)|
| Microservice | FastAPI, LangChain, OpenAI          |
| Vector Store | Qdrant (Dockerized)                 |
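The backend layer is a thin proxy: it receives the frontend's question and forwards it to the FastAPI microservice, relaying the JSON answer back. The forwarding step can be sketched as follows — in Python for brevity, although the repository implements this in Go, and the endpoint path and field names are hypothetical:

```python
import json

# Hypothetical microservice endpoint; the real port/path may differ.
RAG_SERVICE_URL = "http://localhost:8000/query"

def build_forwarded_request(frontend_body: bytes):
    """Validate the frontend's JSON payload and build the request the
    proxy would forward to the RAG microservice. The 'question' field
    name is an assumption, not taken from this repository."""
    payload = json.loads(frontend_body)
    if "question" not in payload:
        raise ValueError("missing 'question' field")
    forwarded = json.dumps({"question": payload["question"]}).encode()
    # A real proxy would now POST `forwarded` to RAG_SERVICE_URL
    # (net/http in Go) and stream the response back to the frontend.
    return RAG_SERVICE_URL, {"Content-Type": "application/json"}, forwarded
```

Keeping the proxy stateless like this means the Go layer can add auth or rate limiting later without touching the Python service.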

🚀 Getting Started

Prerequisites:

  • Docker + Docker Compose
  • Node.js + npm
  • Python 3.10+
  • Go 1.21+

Open separate terminal windows (Command Prompt recommended) for each of the following commands:

1. Start Qdrant (Vector DB)

```bash
cd qdrant
docker-compose up -d
```

2. Start FastAPI (Python RAG Microservice)

```bash
cd ..
uvicorn rag_pipeline.api:app --reload --port 8000
```

3. Start Go Backend

```bash
cd backend
go run main.go
```

4. Start React Frontend

```bash
cd ../frontend
npm install   # Only needed once
npm run dev
```

5. Ingest Your Documents (Required Before Querying)

```bash
cd ../rag_pipeline
python ingest.py path/to/your/file.txt
```
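Ingestion pipelines like this one typically split each document into overlapping chunks before embedding them into the vector store. A rough sketch of that chunking step in plain Python — the chunk size and overlap are assumed defaults, and the real `ingest.py` may use LangChain's text splitters instead:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping chunks for embedding.
    chunk_size and overlap are in characters; both defaults are
    illustrative, not taken from this repository."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap  # overlap keeps context across boundaries
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk would then be embedded and upserted into Qdrant, where the query step retrieves the nearest chunks by cosine similarity.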
