A powerful RAG (Retrieval-Augmented Generation) chatbot that runs locally on your system, enabling intelligent document interaction and question answering.
- Advanced PDF document processing and analysis
- Smart text chunking and semantic embedding generation
- High-performance Redis-based vector search
- LLM-powered responses to your queries
- Fully local execution for data privacy
- Accurate, context-aware answers
- Python 3.8 or higher
- Docker (for Redis Stack)
- Hugging Face account
- CUDA-compatible GPU (optional, for better performance)
```bash
# Clone the repository
git clone https://github.com/namra4122/cli_docDost.git
cd cli_docDost

# Create and activate a virtual environment
python -m venv venv

# On Unix/macOS
source venv/bin/activate

# On Windows
.\venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

You'll need to authenticate with Hugging Face to download the required models:

```bash
huggingface-cli login
```

Start the Redis Stack container:

```bash
docker run -d --name redis-stack \
  -p 6379:6379 \
  -p 8001:8001 \
  redis/redis-stack:latest
```
You can monitor your Redis instance at http://localhost:8001.
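To give an intuition for what the Redis vector index does during retrieval, here is a minimal in-memory sketch of embedding search: chunks are stored alongside vectors, and a query vector is matched by cosine similarity. This is an illustration only — the actual application delegates this work to Redis Stack, and the toy 3-dimensional vectors below stand in for real embedding-model output.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest_chunks(query_vec, index, top_k=2):
    """Brute-force nearest-neighbour search over (chunk_text, vector) pairs."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in index]
    return [text for _, text in sorted(scored, reverse=True)[:top_k]]

# Toy "embeddings" standing in for real model output
index = [
    ("Redis stores vectors",  [0.9, 0.1, 0.0]),
    ("PDF parsing details",   [0.1, 0.9, 0.0]),
    ("LLM answer generation", [0.0, 0.2, 0.9]),
]

print(nearest_chunks([1.0, 0.0, 0.0], index, top_k=1))  # → ['Redis stores vectors']
```

Redis Stack performs the same nearest-neighbour lookup, but with optimized index structures instead of this brute-force scan.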
- Start the application:

```bash
python main.py
```

- Follow the interactive prompts to:
  - Input your PDF document path
  - Ask questions about your document
  - Get AI-generated responses based on the document content
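Before questions can be answered, the document is split into overlapping chunks for embedding. The sketch below illustrates that step; the parameter names and character-based windowing are assumptions for illustration, not the repository's actual implementation.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character windows.

    The overlap keeps sentences that straddle a chunk boundary
    retrievable from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("a" * 500, chunk_size=200, overlap=50)
print(len(chunks), [len(c) for c in chunks])  # → 4 [200, 200, 200, 50]
```

Each chunk is then embedded and written to the Redis index, so an answer can be grounded in the chunks most similar to the question.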
The application can be configured through `config.py`:
- Embedding model selection
- LLM model parameters
- Chunking settings
- Redis connection details
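A `config.py` covering those settings might look like the sketch below. Every name and default here is hypothetical — check the repository's actual `config.py` for the real option names and values.

```python
# Illustrative config.py — option names and defaults are assumptions,
# not the repository's actual settings.
EMBEDDING_MODEL = "sentence-transformers/all-MiniLM-L6-v2"  # assumed embedding model

# LLM parameters
LLM_MAX_NEW_TOKENS = 512
LLM_TEMPERATURE = 0.2

# Chunking settings
CHUNK_SIZE = 200     # characters per chunk
CHUNK_OVERLAP = 50   # characters shared between neighbouring chunks

# Redis connection details (matches the docker run command above)
REDIS_HOST = "localhost"
REDIS_PORT = 6379
```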
Contributions are warmly welcomed! Here's how you can help:
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
If you find this project helpful, please consider giving it a star on GitHub!