Generate anything — text, images, video. Locally. Uncensored.
No cloud. No data collection. No API keys. Auto-detects 12 local backends. Your AI, your rules.
The only desktop app that runs AI chat, image, and video generation — locally, one click, no cloud.
Download · Features · Quick Start · Why This App? · Roadmap
| Chat with Personas | Image / Video Generation |
|---|---|
| ![]() | ![]() |
| Model Manager | Create View with Parameters |
| ![]() | ![]() |
Reliability hotfix: bulletproof the Create model picker against any pathological store state.
Drop-in patch on top of v2.4.0. One focused fix — no new features, no dependency bumps.
- CreateTopControls picker dropdown hardened against any non-array list value — reported on Discord in `#bug-reports` by @phantomderp: the web UI crashes when clicking the model picker at the top of the Create tab. We'd already added the expected store fields + setters in v2.3.9/v2.4.0 (`imageModelList`, `videoModelList`, `comfyRunning`, `setComfyRunning`), but if the field ever arrives as anything other than an array — stale persisted state from a very old install, Zustand rehydration racing the first render, a corrupted localStorage entry, or an old .exe that predates aa31bab — the `.length` / `.map()` calls would still kill the app on picker-open. The read site now passes the list through `Array.isArray(rawList) ? rawList : []`, so undefined / null / object / string / number all render as the empty-state card instead of crashing (see the sketch below this list).
- 2226 / 2226 Vitest tests green (+7 new pathological-state regression tests in `createStore.test.ts`)
- `tsc --noEmit` clean
- Bundled JS contains the fix (grep-confirmed in the minified bundle)
- Dev-preview E2E: injected six pathological types into the store (`undefined`, `null`, `{}`, `"corrupted"`, `42`, populated array), clicked the picker each time — zero errors
- Installed-binary E2E, happy path: Ollama + ComfyUI + 3 image + 3 video models — picker opens with all six, no crash in either mode
- Installed-binary E2E, true fresh-user: Ollama folder renamed, ComfyUI folder renamed, LU AppData wiped — picker shows the "Start ComfyUI to load models" empty-state, no crash in either mode; all six tabs (Chat / Create / Compare / Benchmark / Models / Settings) load cleanly
- No breaking changes, no localStorage migration — upgrade in place
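For readers following along in the source, here is a minimal sketch of the guard described above. The component and store shapes are simplified stand-ins rather than the real `CreateTopControls` code; only the `Array.isArray(rawList) ? rawList : []` fallback is the pattern shipped in this release.

```tsx
// Simplified illustration, not the actual CreateTopControls source.
type CreateStoreSlice = {
  imageModelList?: unknown; // may be anything if persisted state is stale or corrupted
  videoModelList?: unknown;
};

// Normalize whatever the store holds before any .length / .map() call.
function pickerModels(store: CreateStoreSlice, mode: "image" | "video"): string[] {
  const rawList = mode === "image" ? store.imageModelList : store.videoModelList;
  // undefined / null / {} / "corrupted" / 42 all collapse to [] here, so the
  // dropdown renders the empty-state card instead of crashing on open.
  return Array.isArray(rawList) ? (rawList as string[]) : [];
}

function ModelPickerDropdown(props: { store: CreateStoreSlice; mode: "image" | "video" }) {
  const models = pickerModels(props.store, props.mode);
  if (models.length === 0) {
    return <div className="empty-state">Start ComfyUI to load models</div>;
  }
  return (
    <ul>
      {models.map((name) => (
        <li key={name}>{name}</li>
      ))}
    </ul>
  );
}
```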
For older releases, see CHANGELOG.md.
| Feature | Locally Uncensored | Open WebUI | LM Studio | SillyTavern |
|---|---|---|---|---|
| AI Chat | Yes | Yes | Yes | Yes |
| Coding Agent (Codex) | Yes | No | No | No |
| 14 Agent Tools + MCP | Yes | No | No | No |
| Plug & Play Setup | 12 Backends | No | Built-in | No |
| Multi-Provider (20+ Presets) | Yes | Yes | Yes | No |
| A/B Model Compare | Yes | No | No | No |
| Local Benchmark | Yes | No | No | No |
| Image Generation | Yes | No | No | No |
| Image-to-Image | Yes | No | No | No |
| Image-to-Video | Yes | No | No | No |
| Video Generation | Yes | No | No | No |
| File Upload + Vision | Yes | Yes | Yes | No |
| Thinking Mode | Yes | No | No | No |
| Granular Permissions | 7 Categories | No | No | No |
| Uncensored by Default | Yes | No | No | Partial |
| Memory System | Yes | Plugin | No | No |
| Agent Workflows | Yes | No | No | No |
| Document Chat (RAG) | Yes | Yes | No | No |
| Voice (STT + TTS) | Yes | Partial | No | No |
| Remote Access (Phone) | Yes | No | No | No |
| Plugins (Caveman + Personas) | Yes | No | No | Yes |
| Auto-Update | Yes | No | Yes | No |
| Open Source | AGPL-3.0 | MIT | No | AGPL |
| No Docker | Yes | No | Yes | Yes |
- Plug & Play Setup — First-launch wizard auto-detects 12 local backends. Nothing installed? One-click in-app Ollama download and install with progress bar. ComfyUI one-click install with step-by-step progress. Configurable ComfyUI port and path in Settings. Zero config needed.
- Uncensored AI Chat — Abliterated models with zero restrictions. Streaming + thinking display.
- Multi-Provider — 20+ presets. Local: Ollama, LM Studio, vLLM, KoboldCpp, llama.cpp, LocalAI, Jan, TabbyAPI, GPT4All, Aphrodite, SGLang, TGI. Cloud: OpenAI, Anthropic, OpenRouter, Groq, Together, DeepSeek, Mistral. Switch per conversation.
- Codex Coding Agent — Live streaming between tool calls, continue capability, AUTONOMY CONTRACT. File tree, folder picker, up to 50 iterations.
- Agent Mode — 14 tools + MCP: web search/fetch, file I/O, shell, code execution, screenshots, system info, time. Parallel execution, sub-agents, budget system.
- Remote Access — Access your AI from your phone via LAN or Cloudflare Tunnel. Full mobile web app with Agent Mode, Codex, plugins, file attach.
- Image Generation — FLUX 2 Klein, FLUX.1 (schnell/dev), Z-Image Turbo/Base, Juggernaut XL, RealVisXL, DreamShaper XL via ComfyUI. Full parameter control, no content filter.
- Image-to-Image — Upload a source image, adjust denoise strength, transform with any image model.
- Video Generation — Wan 2.1, HunyuanVideo 1.5, LTX 2.3, AnimateDiff Lightning, CogVideoX, FramePack F1 on your GPU.
- Image-to-Video — FramePack F1 (6 GB VRAM), CogVideoX 5B, SVD-XT. Upload an image, get video.
- Thinking Mode — Provider-agnostic. See the AI's reasoning before the answer. Toggle from chat input.
- File Upload + Vision — Drag & drop, paste, clip button. Vision models analyze images.
- Granular Permissions — 7 tool categories, 3 permission levels, per-conversation overrides.
- Smart Tool Selection — Reduces tool definitions per request by ~80%. JSON repair for local LLMs (see the sketch after this feature list).
- Memory System — Persistent across conversations. Auto-extraction. Export/import.
- Agent Workflows — Multi-step chains. 3 built-in (Research, Summarize URL, Code Review). Visual builder.
- Model A/B Compare — Same prompt, two models, side by side. Parallel streaming.
- Local Benchmark — One-click benchmark any model. Tokens/sec leaderboard.
- Document Chat (RAG) — Upload PDFs, DOCX, TXT. Hybrid search with source citations.
- Voice Chat — Push-to-talk STT + sentence-level TTS streaming.
- 20+ Personas — Pre-built characters. Switch without prompt engineering.
- Chat Export — Markdown or JSON. Token counter. Keyboard shortcuts.
- Plugins Dropdown — Caveman Mode (Off/Lite/Full/Ultra for terse responses) + 20+ Personas in one menu. Per-chat. Works in Chat, Agent, Codex.
- Auto-Update — Signed NSIS installer. In-app download with progress bar. User-controlled restart (no forced updates). Settings survive updates.
- Standalone Desktop App — Tauri v2 Rust backend. Download .exe, run it.
- Model Load/Unload — iOS-style toggle in header. Load into VRAM, unload when done.
- AE-Style Header — Clean typography navigation. Models, Settings, Downloads at a glance.
- Privacy First — Zero tracking, all API calls proxied locally. ComfyUI process auto-killed on app close.
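The Smart Tool Selection item above mentions JSON repair for local LLMs. The app's actual repair logic is not reproduced in this README; the sketch below (with a hypothetical function name) only illustrates the general idea of salvaging tool-call arguments that local models often emit with minor syntax problems.

```ts
// Illustrative only: strip code fences, then retry parsing after fixing
// the most common local-LLM JSON mistakes (trailing commas, single-quoted keys).
function repairToolCallJson(raw: string): unknown | null {
  let text = raw.trim();

  // Some models wrap their JSON in markdown code fences.
  const fence = "`".repeat(3);
  if (text.startsWith(fence)) {
    text = text.replace(new RegExp("^" + fence + "(?:json)?", "i"), "");
    text = text.replace(new RegExp(fence + "\\s*$"), "");
    text = text.trim();
  }

  try {
    return JSON.parse(text);
  } catch {
    text = text
      .replace(/,\s*([}\]])/g, "$1")       // drop trailing commas
      .replace(/'([^']+)'\s*:/g, '"$1":'); // single-quoted keys -> double-quoted
    try {
      return JSON.parse(text);
    } catch {
      return null; // unrecoverable; the caller can re-prompt the model
    }
  }
}

// Example: repairToolCallJson('{"query": "weather", }') -> { query: "weather" }
```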
- Desktop: Tauri v2 (Rust backend, standalone .exe)
- Frontend: React 19, TypeScript, Tailwind CSS 4, Framer Motion
- State: Zustand with localStorage persistence (see the sketch after this list)
- AI Backend: 20+ providers (Ollama, LM Studio, vLLM, KoboldCpp, llama.cpp, LocalAI, Jan, OpenAI, Anthropic, OpenRouter, Groq, and more), ComfyUI, faster-whisper
- Build: Vite 8 (dev), Tauri CLI (production)
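The Zustand + localStorage layer above is also where the stale or corrupted entries mentioned in the hotfix notes can originate. Below is a hedged sketch, assuming zustand v4+'s `persist` middleware, of how a persisted slice can sanitize its model lists at rehydration time; the app's real `createStore.ts` may be organized differently.

```ts
// Not the app's real store: a minimal persisted slice with a sanitizing merge.
import { create } from "zustand";
import { persist } from "zustand/middleware";

type CreateState = {
  imageModelList: string[];
  videoModelList: string[];
  setImageModelList: (models: string[]) => void;
  setVideoModelList: (models: string[]) => void;
};

const asStringArray = (value: unknown): string[] =>
  Array.isArray(value) ? value.filter((v): v is string => typeof v === "string") : [];

export const useCreateStore = create<CreateState>()(
  persist(
    (set) => ({
      imageModelList: [],
      videoModelList: [],
      setImageModelList: (models) => set({ imageModelList: models }),
      setVideoModelList: (models) => set({ videoModelList: models }),
    }),
    {
      name: "create-store",
      // Whatever shape localStorage holds, the model lists come back as arrays.
      merge: (persisted, current) => {
        const p = (persisted ?? {}) as Partial<Record<keyof CreateState, unknown>>;
        return {
          ...current,
          imageModelList: asStringArray(p.imageModelList),
          videoModelList: asStringArray(p.videoModelList),
        };
      },
    }
  )
);
```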
Download the installer from Releases:
- `.exe` — NSIS installer (recommended)
- `.msi` — Windows Installer
Other platforms: The source code builds on Linux and macOS via `npm run tauri build`, but only Windows is officially tested and supported.
Plug & Play: Just install and launch. The setup wizard auto-detects all 12 supported local backends (Ollama, LM Studio, vLLM, KoboldCpp, llama.cpp, LocalAI, Jan, GPT4All, text-generation-webui, TabbyAPI, Aphrodite, SGLang). Nothing installed yet? The wizard shows one-click install links for every backend.
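Auto-detection boils down to checking whether anything is answering on each backend's local port. The wizard's real detection logic is not documented here; the sketch below is only an illustration, and the ports listed are the backends' usual defaults, which can differ on your machine.

```ts
// Rough sketch of port-probe style backend detection (illustrative defaults).
const DEFAULT_BACKEND_PORTS: Record<string, number> = {
  Ollama: 11434,
  "LM Studio": 1234,
  "llama.cpp": 8080,
  vLLM: 8000,
  KoboldCpp: 5001,
};

async function detectBackends(timeoutMs = 1500): Promise<string[]> {
  const found: string[] = [];
  await Promise.all(
    Object.entries(DEFAULT_BACKEND_PORTS).map(async ([name, port]) => {
      const controller = new AbortController();
      const timer = setTimeout(() => controller.abort(), timeoutMs);
      try {
        // Most of these servers answer something on their root URL when running.
        const res = await fetch(`http://127.0.0.1:${port}/`, { signal: controller.signal });
        if (res.status < 500) found.push(name);
      } catch {
        // Nothing listening on this port: backend not installed or not running.
      } finally {
        clearTimeout(timer);
      }
    })
  );
  return found;
}
```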
Antivirus warning? Some engines (ESET, Avast, Microsoft SmartScreen) flag the installer as suspicious — this is a false positive caused by heuristics on unsigned NSIS installers that download other binaries. The installer is built by GitHub Actions from public source on `master` (`.github/workflows/release.yml`). The auto-update channel is signed against a public minisign key. Full context, verification steps, and one-click vendor submission links: see SECURITY.md.
New to Locally Uncensored? Read the Getting Started Guide with screenshots for every step.
```bash
git clone https://github.com/PurpleDoubleD/locally-uncensored.git
cd locally-uncensored
npm install
npm run dev
```
⚠️ Just want to use the app? Grab the installer from Releases (the `.exe` or `.msi` in the Download section above). That gives you the full Tauri desktop app with auto-update. The commands below start LU in browser dev-mode — fewer features, Vite proxy noise, meant for contributing to the codebase.
```bash
git clone https://github.com/PurpleDoubleD/locally-uncensored.git
cd locally-uncensored
setup.bat   # Windows — installs Node, Git, Ollama, then npm run dev
# setup.sh  # macOS / Linux equivalent
```

Launches at http://localhost:5173 in your default browser.
Open the Create tab. ComfyUI is auto-detected or one-click installed. Models download with one click. Workflow is set to Auto — just write a prompt and hit Generate.
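Under the hood, Generate queues a node graph on the local ComfyUI server; with Workflow on Auto, the app assembles that graph for you. For the curious, a minimal sketch of the request against ComfyUI's standard `/prompt` endpoint (default port 8188) looks like this; the function name and error handling are illustrative, not the app's code.

```ts
// Queue a workflow graph on a local ComfyUI instance and return its prompt id.
async function queueComfyPrompt(
  workflow: Record<string, unknown>,
  comfyUrl = "http://127.0.0.1:8188"
): Promise<string> {
  const res = await fetch(`${comfyUrl}/prompt`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: workflow }),
  });
  if (!res.ok) throw new Error(`ComfyUI rejected the workflow: ${res.status}`);
  const data = (await res.json()) as { prompt_id: string };
  return data.prompt_id; // poll /history/<prompt_id> (or the websocket) for outputs
}
```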
| Model | VRAM | Best For |
|---|---|---|
| Qwen 3.6 35B MoE | 24 GB | Vision + agentic coding + thinking. Brand new. |
| GLM-4.7-Flash IQ2 | 12 GB | Strongest 30B class. Tool calling. 198K context. |
| Gemma 4 E4B | 4 GB | Lightweight, fast, great for small GPUs. |
| Qwen 3.5 35B MoE | 16 GB | Best agentic, 256K context. SWE-bench leader. |
| Gemma 4 31B | 16 GB | Frontier dense model, native tools + vision. |
| Hermes 3 8B | 6 GB | Agent Mode. Uncensored + tool calling. |
| DeepSeek R1 (8B-70B) | 6-48 GB | Chain-of-thought reasoning. |
| Model | VRAM | Notes |
|---|---|---|
| FLUX.1 Schnell / Dev | 8-10 GB | Best text-to-image. Fast (schnell) or quality (dev). |
| FLUX 2 Klein 4B | 8-10 GB | Next-gen, fastest FLUX model. |
| ERNIE-Image Turbo | 24 GB | Baidu DiT, 8 steps, 1024x1024. New. |
| Z-Image Turbo | 10-16 GB | Uncensored, 8-15 sec per image. |
| Juggernaut XL V9 | 6 GB | Best photorealistic SDXL. |
| Model | VRAM | Notes |
|---|---|---|
| Wan 2.1 T2V 1.3B | 8-10 GB | Fast entry point, 480p. |
| Wan 2.1 T2V 14B | 12+ GB | High quality, 720p. |
| FramePack F1 (I2V) | 6 GB | Image-to-video, revolutionary low VRAM. |
| AnimateDiff Lightning | 6-8 GB | Ultra-fast 4-step animation. |
| HunyuanVideo 1.5 | 12+ GB | Excellent temporal consistency. |
- Plug & Play Setup (auto-detect 12 local backends, one-click install links)
- Codex Coding Agent
- MCP Tool Registry (13 tools)
- Granular Permissions (7 categories)
- File Upload + Vision
- Thinking Mode (provider-agnostic)
- Model Load/Unload from header
- Multi-Provider (20+ presets)
- Agent Mode + Workflows
- Memory System
- A/B Compare + Local Benchmark
- RAG / Document Chat
- Voice Chat (STT + TTS)
- ComfyUI Plug & Play (auto-detect, one-click install)
- 20 Image + Video Model Bundles
- Image-to-Image (I2I)
- Image-to-Video (I2V) — FramePack, CogVideoX, SVD
- Z-Image + FLUX 2 + ERNIE-Image support
- Dynamic Workflow Builder (15 strategies)
- VRAM-Aware Model Filtering
- Think Mode in Chat Input
- Remote Access (LAN + Cloudflare Tunnel)
- Mobile Web App (Agent, Codex, Plugins, Thinking)
- Codex Streaming + Continue + Autonomy Contract
- Agent 13-Phase Rewrite (parallel, budget, sub-agents, MCP)
- Auto-Update (signed NSIS installer)
- Qwen 3.6 Day-0 Support
- Plugins Dropdown (Caveman + Personas)
- Voice Mode (Qwen Omni live voice)
- Upscale + Inpainting
```bash
git clone https://github.com/PurpleDoubleD/locally-uncensored.git
cd locally-uncensored
npm install
npm run dev          # Development
npm run tauri build  # Production binary
```

| Platform | Status | Download |
|---|---|---|
| Windows (10/11) | Fully tested | .exe / .msi |
| Linux / macOS | Build from source | npm run tauri build |
Join the Discord: https://discord.gg/nHnGnDw2c8. Ask questions, share what you built, or help others in our forum channels — chat / image gen / video gen / coding agent.
Check out the Contributing Guide. See open issues or the Roadmap.
AGPL-3.0 License — see LICENSE.
Your data stays on your machine.



