Cross-platform desktop tool for chaining local AI models and plugins into powerful, agentic workflows. It supports prompt-driven orchestration, visual DAG editing, and full offline execution.
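The tool itself is Rust, but the core idea it describes, running a workflow DAG of model and plugin nodes in dependency order, can be sketched in a few lines. This is an illustrative assumption of the pattern, not the project's actual internals; the node names and placeholder lambdas are hypothetical.

```python
# Illustrative only: execute a tiny workflow DAG of steps in dependency order.
from graphlib import TopologicalSorter

# Each node maps to a callable; edges name the upstream outputs it consumes.
steps = {
    "prompt": lambda deps: "summarize the meeting notes",
    "llm": lambda deps: f"LLM({deps['prompt']})",       # placeholder, not a real model
    "tts": lambda deps: f"TTS({deps['llm']})",          # placeholder, not a real model
}
edges = {"prompt": set(), "llm": {"prompt"}, "tts": {"llm"}}

results = {}
for node in TopologicalSorter(edges).static_order():    # predecessors run first
    results[node] = steps[node]({d: results[d] for d in edges[node]})
print(results["tts"])
```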
An intelligent local AI agent powered by open-source LLMs, featuring free web search, hybrid memory, and context-aware query rewriting for real-time, grounded answers.
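A sketch of what context-aware query rewriting can look like, assuming an Ollama-hosted model on its default port; the prompt wording, model name, and endpoint choice are illustrative assumptions, not this project's actual implementation.

```python
# Illustrative: turn a context-dependent follow-up into a standalone search
# query by asking a local model (here via Ollama's /api/generate endpoint).
import requests

def rewrite_query(history: list[str], followup: str) -> str:
    prompt = (
        "Rewrite the final question as a standalone web-search query, using "
        "the conversation for context. Reply with the query only.\n\n"
        "Conversation:\n" + "\n".join(history) +
        f"\n\nFinal question: {followup}"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
    )
    return resp.json()["response"].strip()

print(rewrite_query(
    ["user: Who wrote Dune?", "assistant: Frank Herbert."],
    "When did he die?",
))
```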
🎬 Nano Cinema: an all-in-one local AI video production studio. It automatically orchestrates Llama-3 (script), SDXL-Turbo (visuals), EdgeTTS (audio), and LTX-Video (motion) into a seamless Python workflow. Create cinematic short films with no API fees, full privacy, and professional-grade editing logic included. 🚀
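A minimal sketch of the script-to-visuals-to-audio-to-motion chaining such a pipeline implies. The stage functions below are hypothetical stand-ins, not Nano Cinema's actual API; in the real project each stub would wrap one of the named local models.

```python
# Hypothetical stage stubs illustrating a script -> visuals -> audio -> motion
# pipeline. Each stub stands in for a real local model call.
from dataclasses import dataclass

@dataclass
class Scene:
    text: str
    image_path: str = ""
    audio_path: str = ""
    video_path: str = ""

def write_script(topic: str) -> list[Scene]:   # Llama-3 would go here
    return [Scene(text=f"Opening shot about {topic}.")]

def render_visuals(scene: Scene) -> Scene:     # SDXL-Turbo would go here
    scene.image_path = "frame_000.png"
    return scene

def narrate(scene: Scene) -> Scene:            # EdgeTTS would go here
    scene.audio_path = "voice_000.mp3"
    return scene

def animate(scene: Scene) -> Scene:            # LTX-Video would go here
    scene.video_path = "clip_000.mp4"
    return scene

def produce(topic: str) -> list[Scene]:
    return [animate(narrate(render_visuals(s))) for s in write_script(topic)]

print(produce("a lighthouse at dawn"))
```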
A lightweight, self-contained Python project for running a local large language model (LLM) with minimal dependencies. It uses TinyLlama-1.1B-Chat-v1.0 with llama-cpp-python for inference and Rich for a user-friendly console chat interface.
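A minimal sketch of that pattern: a GGUF model loaded through llama-cpp-python, with Rich handling the console loop. The model filename is an assumption; download the quantized GGUF separately.

```python
# Minimal local chat loop: llama-cpp-python for inference, Rich for the console.
from llama_cpp import Llama
from rich.console import Console

console = Console()
# Path is an assumption; point it at your downloaded TinyLlama GGUF file.
llm = Llama(model_path="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
            n_ctx=2048, verbose=False)

history = [{"role": "system", "content": "You are a helpful assistant."}]
while True:
    user = console.input("[bold cyan]you>[/] ")
    if user.strip().lower() in {"exit", "quit"}:
        break
    history.append({"role": "user", "content": user})
    reply = llm.create_chat_completion(messages=history)
    text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": text})
    console.print(f"[bold green]assistant>[/] {text}")
```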
Lightweight Ruby gem for interacting with locally running Ollama LLMs with streaming, chat, and full offline privacy.
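The gem wraps Ollama's local HTTP API. For illustration (in Python rather than Ruby), here is a sketch of the streaming chat call it would make, assuming Ollama is running on its default port 11434 with a llama3 model already pulled.

```python
# Ollama's /api/chat streams newline-delimited JSON when "stream" is true.
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "stream": True,
    },
    stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue
    chunk = json.loads(line)
    print(chunk["message"]["content"], end="", flush=True)  # print tokens as they arrive
    if chunk.get("done"):
        break
print()
```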
A unified offline AI studio for text, image, audio, and video generation — all running locally on your machine, with no internet or cloud required.
A fully local desktop AI assistant built in C++ with wxWidgets, powered by llama.cpp and running offline.
Setup guide for an AI mini PC: hosting local LLMs via LM Studio in an RDP/headless-GUI configuration. This example uses a Minisforum AI X1 Pro (AMD Ryzen AI 9 HX 370, 64 GB RAM).
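Once LM Studio is serving, other machines can talk to its OpenAI-compatible endpoint (default port 1234). A minimal sketch, assuming a model is already loaded in LM Studio; the model identifier below is a placeholder.

```python
# LM Studio exposes an OpenAI-compatible server; any string works as the API key.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
response = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier LM Studio shows
    messages=[{"role": "user", "content": "Hello from the headless box!"}],
)
print(response.choices[0].message.content)
```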