Legal_Assistant is a practical RAG pipeline for legal documents: embed your corpus, persist a Chroma index, and get context-aware answers from a small web UI (app.py) or a no-frills terminal bot (Chat_bot.py). Built for quick iteration — reindex from the included notebook and swap models or prompt templates without surgery.
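Before anything can be retrieved, documents are split into overlapping chunks and embedded. A minimal sketch of that chunking step is below; the `chunk_text` helper and its size/overlap values are illustrative assumptions, not the notebook's actual code (the project may well use a LangChain text splitter instead).

```python
# Illustrative sketch of the chunking step that precedes embedding.
# chunk_text and its default sizes are assumptions for illustration.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping character chunks ready for embedding."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "Section 1. Definitions. " * 100  # stand-in for a legal document
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))  # prints: 6 500
```

Overlap keeps a clause that straddles a chunk boundary fully present in at least one chunk, which matters for legal text where a sentence cut in half loses its meaning.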
Client: HTML, CSS, JavaScript (Flask + Jinja2 templates)
Server: Python, Flask
Database / Storage: Chroma vector database
ML / AI: Hugging Face embeddings, sentence-transformers, LangChain
Python, Flask, HTML, CSS, JavaScript, Jinja2, RAG, embeddings, sentence-transformers, Hugging Face Transformers, Chroma, Pandas, NumPy, JSON, Virtualenv, VS Code, Jupyter Notebook, Web UI templating with Flask, static assets (CSS/JS), Legal text analysis, document retrieval, RAG pipelines
To run this project, you will need to add the following environment variable to your .env file:
HUGGINGFACEHUB_ACCESS_TOKEN
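A minimal .env might look like the following (the token value is a placeholder, assuming the app reads it with something like python-dotenv's load_dotenv()):

```bash
HUGGINGFACEHUB_ACCESS_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
```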
To run this project locally:
```bash
python -m venv venv
```

Linux / macOS:

```bash
source venv/bin/activate
```

Windows PowerShell:

```bash
.\venv\Scripts\Activate
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Run the app:

```bash
python app.py
```
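Under the hood, answering a question boils down to embedding the query and retrieving the nearest stored chunks. In this project, sentence-transformers produces the embeddings and Chroma performs the nearest-neighbour search; the stdlib-only sketch below illustrates the same cosine-similarity retrieval idea with toy vectors (all chunk texts and vectors here are made up for illustration).

```python
import math

# Stdlib-only illustration of the retrieval step of a RAG pipeline.
# The real project delegates this to Chroma over sentence-transformers
# embeddings; the toy 3-dimensional vectors below are assumptions.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, index, top_k=2):
    """Return the top_k chunks whose embeddings are closest to the query."""
    scored = [(cosine_similarity(query_vec, vec), chunk) for chunk, vec in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:top_k]]

# Toy "embedded" corpus: (chunk text, embedding vector)
index = [
    ("Clause on termination notice periods", [0.9, 0.1, 0.0]),
    ("Definitions of the contracting parties", [0.1, 0.8, 0.2]),
    ("Governing law and jurisdiction clause", [0.0, 0.2, 0.9]),
]

query = [0.85, 0.15, 0.05]  # pretend embedding of "How much notice to terminate?"
print(retrieve(query, index, top_k=1))
```

The retrieved chunks are then stuffed into the prompt template alongside the user's question, which is what makes the answers context-aware rather than generic.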