This project implements an AI chatbot leveraging LangChain, Retrieval-Augmented Generation (RAG), and Django to expose the chatbot's functionalities through API endpoints. The chatbot is designed to provide detailed and accurate responses based on structured data stored in a ChromaDB vector database.
## Features

- Conversational AI: Provides intelligent, context-aware responses based on Manu's resume and personality data.
- RAG-Based Search: Retrieves relevant information via similarity search over vectorized documents stored in ChromaDB.
- Agent Execution: Uses LangChain's reactive agent architecture for dynamic, task-specific responses.
- Django API: Exposes chatbot interactions through a RESTful API.
## Tech Stack

- Backend: Python, Django
- AI Frameworks: LangChain, LangGraph
- Vector Database: ChromaDB
- Models: OpenAI models (chat and embeddings)
- Document Processing: LangChain text splitters, Docx2txt, PyPDFLoader
## Prerequisites

- Python 3.8+
- A virtual environment tool such as `venv` or `conda`
- An OpenAI API key (stored in a `.env` file)
- Django and the required libraries (a `requirements.txt` file is provided)
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/manugeorge04/PersonaAI.git
   cd PersonaAI
   ```
2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```
3. Install the dependencies:

   ```bash
   pip install -r requirements.txt
   ```
4. Set up the data directory:

   Inside the project root directory, create a `data` folder with the following structure:

   ```
   data/
   ├── Resume_Data/         # Store resume-related .docx files here
   └── Personality_Data/    # Store personality-related .docx files here
   ```

   Add `.docx` files for:
   - Resume Data: one or more `.docx` files with professional experience, projects, and achievements.
   - Personality Data: `.docx` files with personal traits, hobbies, or character-related information.

   Example:

   ```
   data/
   ├── Resume_Data/
   │   ├── resume1.docx
   │   └── resume2.docx
   └── Personality_Data/
       ├── personality_traits.docx
       └── hobbies.docx
   ```

   You can add multiple `.docx` files to each folder as needed.
5. Set up the environment variables:

   Create a `.env` file in the root directory with the following content:

   ```
   OPENAI_API_KEY=your_openai_api_key
   ```
6. Initialize the vector database and process the documents:

   ```bash
   python vectorstore.py
   ```
7. Run the Django development server:

   ```bash
   python manage.py runserver
   ```
## Usage

1. Start the server (`python manage.py runserver`).
2. Use a tool such as Postman or `curl` to send a POST request to the `/send_message/` endpoint.
   - Endpoint: `http://127.0.0.1:8000/send_message/`
   - Request body:

     ```json
     { "question": "What are Manu's skills?" }
     ```

   - Response:

     ```json
     { "response": "Manu is skilled in Python, Django, LangChain, and more..." }
     ```
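The endpoint can also be called from Python with the `requests` library. A minimal sketch, assuming the default development server URL:

```python
import requests


def ask(question: str,
        url: str = "http://127.0.0.1:8000/send_message/") -> str:
    # POST a JSON body like {"question": "..."} and return the "response" field.
    resp = requests.post(url, json={"question": question}, timeout=30)
    resp.raise_for_status()
    return resp.json()["response"]
```

For example, `ask("What are Manu's skills?")` should return the chatbot's answer string once the server is running.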
## API Endpoint

### `POST /send_message/`

- Description: Handles user queries and returns responses based on the structured data.
- Request body:

  ```json
  { "question": "Your question here" }
  ```

- Response:
  - `200 OK`: Returns the chatbot's response. Example:

    ```json
    { "response": "Manu has experience working with AI, Python, and more..." }
    ```
## Project Structure

```
chatbot-django-rag/
│
├── llm_utils/
│   ├── chatbot.py           # Main chatbot logic
│   └── vectorstore.py       # Vector database setup
│
├── views.py                 # API endpoint logic
│
├── data/
│   ├── Resume_Data/         # Directory for resume documents
│   └── Personality_Data/    # Directory for personality documents
│
├── requirements.txt         # Python dependencies
├── manage.py                # Django management script
├── .env                     # Environment variables
└── README.md                # Project documentation
```
## Contributing

1. Fork the repository.
2. Create a new branch (e.g. `feature/new-feature`).
3. Commit your changes.
4. Push to the branch.
5. Open a pull request.
## License

This project is licensed under the MIT License.