
novamind/local-text-assistant


A Proof of Concept (PoC) that demonstrates local data processing with LLMs using the LangChain framework.

Features

  • Local Inference Server: Utilizes Ollama running any supported LLM (customizable) on an NVIDIA GPU
  • LangChain: Employs WebBaseLoader and off-the-shelf chains for data loading and processing (see the sketch below)
  • Multi-Agent System: Includes agents for summarization and question answering
  • Streamlit Interface: Provides a user-friendly UI for interactions
  • Microservice Architecture: Uses Docker for containerized deployment

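The core flow, loading a web page and summarizing it with a locally served model, can be sketched roughly as follows. This is a minimal illustration under assumptions, not the repository's actual code: it assumes an Ollama server on its default port, an example model name (llama3), and LangChain's WebBaseLoader with an off-the-shelf summarization chain (import paths vary between LangChain versions).

# Minimal sketch of the load-and-summarize flow (illustrative, not the repository's code).
# Assumes a local Ollama server with an already pulled model; names are examples.
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.llms import Ollama
from langchain.chains.summarize import load_summarize_chain

llm = Ollama(model="llama3", base_url="http://localhost:11434")  # example model name

# Load the page the user provides through the Streamlit UI
docs = WebBaseLoader("https://example.com/article").load()

# Off-the-shelf "stuff" chain: concatenate the documents and ask the LLM for a summary
chain = load_summarize_chain(llm, chain_type="stuff")
result = chain.invoke({"input_documents": docs})
print(result["output_text"])

A question-answering agent can be assembled in the same way, for example with LangChain's load_qa_chain over the same loaded documents.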

Quick Start

Follow these steps to clone the repository and start the service using Docker Compose.

  1. Clone the repository:
git clone https://github.com/novamind/local-text-assistant.git
cd local-text-assistant
  2. Start the service:
  • If your machine has an NVIDIA GPU:
docker-compose -f docker-compose-cuda.yml up
  • Otherwise:
docker-compose -f docker-compose-cpu.yml up
  3. Visit http://localhost:8501 and enter the URL of the web page you want to summarize


Configuration

Use the .env file to change the model settings.
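The actual variable names are defined in the repository's .env file; a hypothetical example (names invented for illustration only) might look like:

# Hypothetical .env entries; check the repository's .env for the real variable names
MODEL_NAME=llama3
OLLAMA_BASE_URL=http://ollama:11434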

