LTX Desktop is an open-source desktop app for generating videos with LTX models — locally on supported Windows NVIDIA GPUs, with an API mode for unsupported hardware and macOS.
Status: Beta. Expect breaking changes. The frontend architecture is under active refactor; large UI PRs may be declined for now (see CONTRIBUTING.md).
- Text-to-video — generate video clips from text prompts
- Image-to-video — animate a still image into video
- Audio-to-video — drive video generation from an audio track
- Image generation — create images with ZIT (local) or fal API
- Image editing (img2img) — edit existing images with ZIT Edit
- Video Retake — re-generate portions of an existing video
- IC-LoRA — identity-consistent generation with LoRA weights
- Video Extend — continue generating from the last frame of a video
- Prompt Enhancement — AI-powered prompt rewriting (via LTX API or Palette)
- Batch Builder — queue multiple generation jobs at once
- List mode — add prompts one-by-one with per-job settings
- Import mode — bulk-import prompts from CSV, JSON, or plain text files
- Grid Sweep mode — combinatorial parameter sweeps (prompts × seeds × models)
- Timeline import — import an edited timeline as a batch to re-generate all segments
- Gallery — browse, filter, and manage all generated images and videos
- Prompt Library — save, tag, and reuse favorite prompts
- Characters — store character descriptions for consistent generation
- Styles — save and apply visual style presets
- References — manage reference images for guided generation
- Wildcards — define placeholder tokens that expand to random values
- Directors Palette sync — connect to Directors Palette for cloud-synced library content
- Email/password login — authenticate directly or via deep link
- Credit balance & cost tracking — view remaining credits in the header, see estimated cost on Generate buttons before submitting
- Automatic credit deduction — API-slot jobs automatically deduct credits after successful generation
- Seedance video generation — generate videos via Seedance 1.5 Pro through the Replicate API
- Video Editor — multi-track timeline editor with clips, transitions, and keyframes
- Video Projects — save and reopen editing sessions
- FFmpeg export — export final videos with configurable codec and quality settings
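Grid Sweep mode above is essentially a cartesian product over parameter lists: every combination of prompt, seed, and model becomes one queued job. A minimal sketch of that expansion (the `SweepJob` shape and `expandSweep` helper are illustrative, not the app's actual API):

```typescript
// Hypothetical Grid Sweep expansion: prompts × seeds × models → one job each.
interface SweepJob {
  prompt: string;
  seed: number;
  model: string;
}

function expandSweep(prompts: string[], seeds: number[], models: string[]): SweepJob[] {
  const jobs: SweepJob[] = [];
  for (const prompt of prompts) {
    for (const seed of seeds) {
      for (const model of models) {
        jobs.push({ prompt, seed, model });
      }
    }
  }
  return jobs;
}

// 2 prompts × 2 seeds × 1 model = 4 jobs
const jobs = expandSweep(["a cat", "a dog"], [1, 2], ["ltx-2"]);
console.log(jobs.length); // 4
```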
| Platform / hardware | Generation mode | Notes |
|---|---|---|
| Windows + CUDA GPU with ≥32GB VRAM | Local generation | Downloads model weights locally |
| Windows (no CUDA, <32GB VRAM, or unknown VRAM) | API-only | LTX API key required |
| macOS (Apple Silicon builds) | API-only | LTX API key required |
| Linux | Not officially supported | No official builds |
In API-only mode, available resolutions/durations may be limited to what the API supports.
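The table above amounts to a simple decision rule. A hedged sketch of that gate (the function name, inputs, and detection details are illustrative; the real app determines platform and VRAM itself):

```typescript
type GenerationMode = "local" | "api-only" | "unsupported";

// Sketch of the platform/hardware gate described in the table above.
// vramGB is null when VRAM cannot be detected.
function selectMode(
  platform: "win32" | "darwin" | "linux",
  hasCuda: boolean,
  vramGB: number | null
): GenerationMode {
  if (platform === "linux") return "unsupported"; // no official builds
  if (platform === "darwin") return "api-only";   // LTX API key required
  // Windows: local generation only with CUDA and >= 32 GB of known VRAM
  if (hasCuda && vramGB !== null && vramGB >= 32) return "local";
  return "api-only";
}
```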
- Windows 10/11 (x64)
- NVIDIA GPU with CUDA support and ≥32GB VRAM (more is better)
- 16GB+ RAM (32GB recommended)
- Plenty of free disk space for model weights and outputs
- Apple Silicon (arm64)
- macOS 13+ (Ventura)
- Stable internet connection
- Download the latest installer from GitHub Releases: Releases
- Install and launch LTX Desktop
- Complete first-run setup
LTX Desktop stores app data (settings, models, logs) in:
- Windows: `%LOCALAPPDATA%\LTXDesktop\`
- macOS: `~/Library/Application Support/LTXDesktop/`
Model weights are downloaded into the `models/` subfolder (this can be large and may take time).
On first launch you may be prompted to review/accept model license terms (license text is fetched from Hugging Face; requires internet).
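The locations above follow each platform's standard app-data convention. A self-contained sketch of the mapping (in the real app Electron resolves these paths; here the environment values are passed in explicitly for illustration):

```typescript
// Illustrative mapping of the app-data locations listed above.
// env values are injected so the function is self-contained and testable.
function appDataDir(
  platform: "win32" | "darwin",
  env: { localAppData?: string; home?: string }
): string {
  if (platform === "win32") {
    return `${env.localAppData}\\LTXDesktop`; // %LOCALAPPDATA%\LTXDesktop\
  }
  return `${env.home}/Library/Application Support/LTXDesktop`;
}

// Model weights then live under `<appDataDir>/models/`.
```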
Text encoding: to generate videos you must configure one of two text-encoding options:
- LTX API key (cloud text encoding) — text encoding via the API is completely FREE and highly recommended; it speeds up inference and saves memory. Generate a free API key at the LTX Console. Read more.
- Local Text Encoder (extra download; enables fully-local operation on supported Windows hardware) — if you don't wish to generate an API key, you can enable local text encoding from the Settings menu.
The LTX API is used for:
- Cloud text encoding and prompt enhancement — FREE; text encoding is highly recommended to speed up inference and save memory
- API-based video generations (required on macOS and on unsupported Windows hardware) — paid
- Retake — paid
An LTX API key is required in API-only mode, but optional in local mode on Windows if you enable the Local Text Encoder.
Generate a FREE API key at the LTX Console. Text encoding is free; video generation API usage is paid. Read more.
When you use API-backed features, prompts and media inputs are sent to the API service. Your API key is stored locally in your app data folder — treat it like a secret.
Used for Z Image Turbo text-to-image generation in API mode. When enabled, image generation requests are sent to fal.ai.
Create an API key in the fal dashboard.
Used for Seedance 1.5 Pro video generation. When enabled, video generation requests are sent to Replicate.
Create an API key in the Replicate dashboard.
Used for AI prompt suggestions. When enabled, prompt context and frames may be sent to Google Gemini.
Connect to Directors Palette to sync library content (characters, styles, references), use cloud-based prompt enhancement, and track credit usage. Sign in via email/password in Settings > Palette Connection.
Credits are consumed when generating via API-backed models (cloud video, Seedance, cloud image). Local GPU generations are free. Credit balance and per-generation costs are displayed in the UI.
LTX Desktop is split into three main layers:
- Renderer (`frontend/`): TypeScript + React UI.
  - Calls the local backend over HTTP at `http://localhost:8000`.
  - Talks to Electron via the preload bridge (`window.electronAPI`).
- Electron (`electron/`): TypeScript main process + preload.
  - Owns app lifecycle and OS integration (file dialogs, native export via ffmpeg, starting/managing the Python backend).
  - Security: the renderer is sandboxed (`contextIsolation: true`, `nodeIntegration: false`).
- Backend (`backend/`): Python + FastAPI local server.
  - Orchestrates generation, model downloads, and GPU execution.
  - Calls external APIs only when API-backed features are used.
  - Output files follow the naming convention `dd_{model}_{prompt_slug}_{timestamp}.{ext}` for easy identification.
```mermaid
graph TD
    UI["Renderer (React + TS)"] -->|HTTP: localhost:8000| BE["Backend (FastAPI + Python)"]
    UI -->|IPC via preload: window.electronAPI| EL["Electron main (TS)"]
    EL --> OS["OS integration (files, dialogs, ffmpeg, process mgmt)"]
    BE --> GPU["Local models + GPU (when supported)"]
    BE --> EXT["External APIs (only for API-backed features)"]
    EL --> DATA["App data folder (settings/models/logs)"]
    BE --> DATA
```
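The output-filename convention mentioned above can be parsed back into its parts. A sketch of such a parser (the regex is an assumption about the exact layout: it assumes the model token contains no underscores and the timestamp is numeric, and anchors on the trailing timestamp because `prompt_slug` itself may contain underscores):

```typescript
interface OutputName {
  model: string;
  promptSlug: string;
  timestamp: string;
  ext: string;
}

// Sketch of a parser for the `dd_{model}_{prompt_slug}_{timestamp}.{ext}`
// convention; real filenames may differ in detail.
function parseOutputName(name: string): OutputName | null {
  const m = name.match(/^dd_([^_]+)_(.+)_(\d+)\.(\w+)$/);
  if (!m) return null;
  return { model: m[1], promptSlug: m[2], timestamp: m[3], ext: m[4] };
}

parseOutputName("dd_ltx2_sunset_over_sea_1712345678.mp4");
// → { model: "ltx2", promptSlug: "sunset_over_sea", timestamp: "1712345678", ext: "mp4" }
```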
Prereqs:

- Node.js
- `uv` (Python package manager)
- Python 3.12+
- Git

Setup:

```bash
# macOS
pnpm setup:dev:mac

# Windows
pnpm setup:dev:win
```

Run:

```bash
pnpm dev
```

Debug:

```bash
pnpm dev:debug
```

`dev:debug` starts Electron with the inspector enabled and starts the Python backend with debugpy.

Typecheck:

```bash
pnpm typecheck
```

Backend tests:

```bash
pnpm backend:test
```

Building installers:

- See INSTALLER.md
LTX Desktop collects minimal, anonymous usage analytics (app version, platform, and a random installation ID) to help prioritize development. No personal information or generated content is collected. Analytics is enabled by default and can be disabled in Settings > General > Anonymous Analytics. See TELEMETRY.md for details.
- INSTALLER.md — building installers
- TELEMETRY.md — telemetry and privacy
- backend/architecture.md — backend architecture
See CONTRIBUTING.md.
Apache-2.0 — see LICENSE.txt.
Third-party notices (including model licenses/terms): NOTICES.md.
Model weights are downloaded separately and may be governed by additional licenses/terms.


