A Proof of Concept (PoC) demonstrating local data processing with the LangChain framework and locally served LLMs.
- Local Inference Server: Utilizes Ollama running any supported LLM (customizable) on an NVIDIA GPU
- LangChain: Uses WebBaseLoader and off-the-shelf chains to load and process web content (a minimal sketch follows this list)
- Multi-Agent System: Includes agents for summarization and question answering
- Streamlit Interface: Provides a user-friendly UI for interactions
- Microservice Architecture: Uses Docker for containerized deployment
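
As a rough illustration of the LangChain bullet above, the sketch below loads a web page with WebBaseLoader and summarizes it with an Ollama-served model. The model name, base URL, and package layout (`langchain-community`, `langchain-ollama`) are assumptions and may differ from the repository's actual code.

```python
# Minimal sketch, not the repository's actual code: load a page and
# summarize it with an Ollama-served model via LangChain.
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

# Assumed model name and Ollama endpoint; the real values come from the .env file.
llm = ChatOllama(model="llama3", base_url="http://localhost:11434")

# Load the page the user provides in the Streamlit UI.
docs = WebBaseLoader("https://example.com/article").load()

prompt = ChatPromptTemplate.from_template(
    "Summarize the following content in a few sentences:\n\n{context}"
)
chain = prompt | llm

summary = chain.invoke({"context": docs[0].page_content})
print(summary.content)
```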
Follow these steps to clone the repository and start the service using Docker Compose.
- Clone the repository:

  ```bash
  git clone https://github.com/novamind/local-text-assistant.git
  cd local-text-assistant
  ```

- Start the service:
  - If your machine has an NVIDIA GPU:

    ```bash
    docker-compose -f docker-compose-cuda.yml up
    ```

  - Otherwise:

    ```bash
    docker-compose -f docker-compose-cpu.yml up
    ```

- Visit http://localhost:8501 and provide a link to summarize its content.
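
For context on why there are separate CUDA and CPU Compose files: GPU access is typically granted to the Ollama container with a device reservation, roughly as sketched below. The service name, image, and port here are assumptions, not the contents of the repository's docker-compose-cuda.yml.

```yaml
# Sketch only; not the repository's actual Compose file.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```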
Use the .env file to change the model settings.
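
A .env file along these lines is one way the model settings might look; the variable names below are hypothetical, so check the .env shipped with the repository for the actual keys.

```ini
# Hypothetical variable names; the repository's .env may use different keys.
OLLAMA_MODEL=llama3
OLLAMA_BASE_URL=http://localhost:11434
```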

