🤖 Local AI Agent with Python (Ollama, LangChain & RAG)

A locally hosted AI agent built using Python, Ollama, LangChain, and RAG (Retrieval-Augmented Generation).
This project enables offline LLM inference, semantic search, and contextual question answering using a vector database.


🚀 Features

  • 🧠 Local language model integration via Ollama
  • 🔍 RAG-based context retrieval for enhanced responses
  • 🗂️ ChromaDB as a vector store for embedding management
  • 🔗 LangChain pipeline for modular orchestration
  • 🔊 Seamless local and offline LLM usage
  • 💾 Persistent and queryable data storage
  • 🔐 Privacy-friendly — no external API calls required

🏗️ Tech Stack

  • Python 3.10+
  • Ollama – Local LLM runtime
  • LangChain – Framework for chaining LLM logic
  • ChromaDB – Vector database for document retrieval
  • FAISS / SentenceTransformers – For embedding generation
  • VS Code / Jupyter / Colab – Development environment

⚙️ Installation & Setup

1️⃣ Clone the Repository

git clone https://github.com/rathod-0007/LocalAIAgentWithRAG.git
cd LocalAIAgentWithRAG

2️⃣ Create and Activate Virtual Environment

python -m venv venv
source venv/bin/activate    # On macOS/Linux
venv\Scripts\activate     # On Windows

3️⃣ Install Dependencies

pip install -r requirements.txt
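The repository's requirements.txt is not reproduced here; given the tech stack above, it presumably lists packages along these lines (package names are illustrative assumptions, not the repository's actual file):

```text
langchain
langchain-ollama
langchain-chroma
chromadb
sentence-transformers
```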

4️⃣ Run the Application

python main.py
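Note that running the agent also assumes an Ollama server is installed and listening locally (by default on port 11434) with a model already pulled, e.g. via `ollama pull llama3`. A minimal pre-flight check, assuming the default port:

```python
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timeout: no server is listening.
        return False
```

Calling `ollama_is_running()` before starting the agent gives a clearer failure message than a mid-pipeline connection error.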

🧩 Architecture Overview

User Query → LangChain → Retriever (ChromaDB) → Ollama Model → Response
  • LangChain: Handles query flow and RAG orchestration
  • Ollama: Runs local LLMs (e.g., Llama3, Mistral)
  • ChromaDB: Manages embeddings for semantic retrieval
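The retrieval step in this flow can be illustrated with a self-contained toy: a bag-of-words cosine similarity stands in for real embeddings, and the model call is replaced by prompt assembly. The function names and example documents below are illustrative sketches, not code from the repository.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real pipeline uses
    # SentenceTransformers vectors stored in ChromaDB.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Ollama runs large language models locally.",
    "ChromaDB stores embeddings for semantic retrieval.",
    "LangChain chains prompts, retrievers and models together.",
]
context = retrieve("where are embeddings stored?", docs)
# The retrieved context is prepended to the query; in the real pipeline
# this prompt would be sent to the Ollama model for the final answer.
prompt = f"Context: {context[0]}\nQuestion: where are embeddings stored?"
```

Swapping the toy `embed` for a real embedding model and the ranking loop for a ChromaDB similarity query yields the production flow shown above.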

📁 Project Structure

LocalAIAgentWithRAG/
│
├── main.py               # Entry point of the application
├── retriever.py          # Vector retrieval and embedding logic
├── vector/               # Database and embedding utilities
├── data/                 # Local dataset/documents
├── requirements.txt      # Dependencies
└── README.md             # Project documentation

🧠 Example Use Cases

  • Chat with local documents or datasets
  • Build knowledge-grounded assistants
  • Perform semantic search and retrieval
  • Run LLMs fully offline

💡 Future Enhancements

  • 🧩 Integration with advanced embedding models (e.g., E5, OpenAI)
  • 🌐 Web-based UI with Streamlit
  • 📊 Conversation analytics dashboard
  • 💬 Multi-agent reasoning system

🪪 License

This project is licensed under the MIT License.
Feel free to use, modify, and distribute it with attribution.


👤 Author

Rathod Pavan Kumar Naik
GitHub | LinkedIn
