This is a beginner-friendly Retrieval-Augmented Generation (RAG) system built using:
- ChromaDB – vector database
- LangChain – retrieval + LLM pipeline
- Google Generative AI API – LLM for generating answers
The project demonstrates how to store document embeddings, retrieve relevant chunks, and generate accurate answers using RAG.
🔹 Add your own documents for embedding
🔹 Store embeddings in ChromaDB
🔹 Retrieve the most relevant chunks using similarity search
🔹 Use Google Generative AI to generate final answers
🔹 Simple, clean code for beginners
🔹 Easy to customize and extend
- Load documents
- Split into chunks
- Convert chunks → embeddings
- Store embeddings in ChromaDB
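The four indexing steps above can be sketched as a toy, dependency-free version. Everything here is illustrative: the splitter is a naive fixed-size cutter (LangChain's `RecursiveCharacterTextSplitter` is smarter about paragraph and sentence boundaries), `toy_embed` is a stand-in for the real Google embedding model, and a plain Python list plays the role ChromaDB fills in the actual project.

```python
def split_into_chunks(text, chunk_size=40, overlap=10):
    """Naive fixed-size splitter with overlap; LangChain's splitters
    additionally respect paragraph/sentence boundaries."""
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

def toy_embed(text):
    """Stand-in for a real embedding model (e.g. Google's embedding API):
    a bag-of-letters count vector over a-z."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

# "Vector store": (chunk, embedding) pairs — the role ChromaDB plays for real.
document = "ChromaDB stores vectors. LangChain wires retrieval to the LLM."
store = [(chunk, toy_embed(chunk)) for chunk in split_into_chunks(document)]
```

In the real project, `store` would instead be a persisted ChromaDB collection built from your own documents.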
- User asks a question
- System retrieves the most similar chunks
- The Google LLM generates an answer using the retrieved text
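The query side can be sketched the same way. The chunks, question, and bag-of-letters embedding below are all made-up illustrations: `retrieve` ranks stored chunks by cosine similarity to the question (the similarity search ChromaDB performs for real), and `build_prompt` stuffs the winners into the prompt that the actual project would send to the Google Generative AI API.

```python
import math

def toy_embed(text):
    """Stand-in for a real embedding model: bag-of-letters count vector."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, store, k=2):
    """Rank stored (chunk, embedding) pairs against the question embedding
    and return the k most similar chunks."""
    q = toy_embed(question)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def build_prompt(question, context_chunks):
    """Assemble the prompt the LLM would answer from."""
    context = "\n".join(context_chunks)
    return (f"Answer the question using only this context:\n{context}\n\n"
            f"Question: {question}")

# Toy index (made-up chunks; the real store lives in ChromaDB).
store = [(text, toy_embed(text)) for text in [
    "ChromaDB is the vector database",
    "Bananas are yellow",
]]
question = "Which database stores the vectors?"
prompt = build_prompt(question, retrieve(question, store, k=1))
```

The last step — passing `prompt` to the Google LLM — is what turns retrieval into generation.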
RAG = Retrieval (find the most relevant context) + Generation (answer with the LLM)