"""
A smart assistant that ingests documents and returns contextual, AI-generated answers through a Retrieval-Augmented Generation (RAG) architecture built on LangChain, HuggingFace, and FAISS. Minimal sketches of the QA pipeline and the optional image-generation path follow the feature list below.
- PDF ingestion
- LLM-based QA (Retrieval-Augmented Generation)
- HuggingFace LLM backend
- Optional image generation using diffusion models
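
The QA path corresponds to a fairly standard LangChain RAG loop: load a PDF, chunk it, embed the chunks into a FAISS index, and answer questions with a HuggingFace-hosted LLM over the retrieved chunks. The sketch below is illustrative only, not the app's actual code; the file name, chunk sizes, embedding model, and LLM repo id are assumptions, and LangChain import paths vary between releases.

```python
# Illustrative RAG pipeline sketch -- module paths, model names, and chunking
# parameters are assumptions, not the application's exact configuration.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import HuggingFaceHub
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter

# 1. Ingest a PDF and split it into overlapping chunks.
docs = PyPDFLoader("paper.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks and index them with FAISS for similarity search.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
index = FAISS.from_documents(chunks, embeddings)

# 3. Retrieval-augmented QA: retrieved chunks are passed to the LLM as context.
llm = HuggingFaceHub(repo_id="google/flan-t5-large")  # reads HUGGINGFACEHUB_API_TOKEN
qa = RetrievalQA.from_chain_type(llm=llm, retriever=index.as_retriever(search_kwargs={"k": 4}))

print(qa.invoke("What problem does this document address?")["result"])
```

Running the sketch needs the same HUGGINGFACEHUB_API_TOKEN described in the setup steps, plus pypdf, faiss-cpu, and sentence-transformers installed.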
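
The optional image-generation feature maps naturally onto the diffusers library. This is a sketch under assumptions: the checkpoint name and the CUDA device are placeholders, and the app may use a different model or pipeline class.

```python
# Illustrative diffusion sketch -- the checkpoint and device are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # use "cpu" (and drop float16) if no GPU is available

image = pipe("A schematic illustration of a retrieval-augmented generation pipeline").images[0]
image.save("illustration.png")
```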
- Install dependencies:
pip install -r requirements.txt
- Add your Hugging Face token to .env:
HUGGINGFACEHUB_API_TOKEN=your_token_here
- Run the app (a sketch of the entry point follows these steps):
streamlit run ui/app.py
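
For orientation only, the entry point launched by streamlit run ui/app.py might look roughly like the sketch below: the token is pulled from .env with python-dotenv, and the widgets shown are placeholders rather than the app's real interface.

```python
# ui/app.py -- minimal sketch only; the real app's layout and wiring differ.
import os

import streamlit as st
from dotenv import load_dotenv

load_dotenv()  # reads HUGGINGFACEHUB_API_TOKEN from .env into the environment

st.title("Multimedia Research Assistant")

if not os.getenv("HUGGINGFACEHUB_API_TOKEN"):
    st.error("HUGGINGFACEHUB_API_TOKEN is not set; add it to your .env file.")
    st.stop()

uploaded = st.file_uploader("Upload a PDF", type="pdf")
question = st.text_input("Ask a question about the document")
st.caption("Wire the uploaded file and question into the RAG pipeline sketched earlier.")
```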
- The application is deployed at https://multimediaresearchassistant.streamlit.app/ and is publicly available.





