Releases: NEURASCOPE/neurascreen
v1.6.0 — CLI Voice Management
CLI Voice Management
The voices.json configuration (previously GUI-only) is now available from the CLI.
New commands
neurascreen voices list # List all voices per provider
neurascreen voices list -p openai # Filter by provider
neurascreen voices add gradium abc123 "My voice" # Add a voice
neurascreen voices set-default openai nova # Set default voice
neurascreen voices remove gradium abc123 # Remove a voice
Config fallback
When `TTS_VOICE_ID` or `TTS_MODEL` is empty in `.env`, `Config.load()` now falls back to the provider defaults from `~/.neurascreen/voices.json`.
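For illustration, a minimal sketch of what such a `~/.neurascreen/voices.json` could contain — the field names here are assumptions, not the project's documented schema:

```json
{
  "defaults": {
    "openai": "nova",
    "gradium": "abc123"
  },
  "voices": {
    "openai": [{ "id": "nova", "label": "Nova" }],
    "gradium": [{ "id": "abc123", "label": "My voice" }]
  }
}
```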
Validation warning
`neurascreen validate` now warns if the configured voice is not found in `voices.json` for the active provider (warning only; it does not fail).
Shared config
The ~/.neurascreen/voices.json file is shared between the GUI (TTS panel) and the CLI. Changes made via either interface are visible to both.
Stats
- 370 tests (160 CLI + 11 voices + 199 GUI), 0 regressions
- 1 issue closed (#33)
v1.5.0 — Desktop GUI
Desktop GUI — Complete
NeuraScreen now includes a full desktop application built with PySide6.
New features
Scenario Editor — Visual step list with drag-reorder, adaptive detail panel for 14 action types, JSON source view with syntax highlighting, split view, undo/redo, templates
Execution Panel — Run validate/preview/run/full from the GUI with real-time colored console, headed mode by default
Configuration Manager — Visual .env editor with 7 tabs (Application, Browser, Screen Capture, TTS, Login Selectors, Canvas Selectors, Directories), validation, import/export
TTS & Audio Preview — Per-provider voice config, per-step audio preview, pronunciation helper, narration statistics
Output Browser — Video listing with integrated player (QMediaPlayer), SRT subtitles viewer, YouTube chapters viewer, auto-refresh
Macro Recorder (#31) — Record browser interactions from the GUI with live event feed, cleanup options (dedup clicks, merge navigations, cap waits), direct import into editor
Selector Validator (#32) — Verify scenario selectors against the real DOM using Playwright headless, found/not found/multiple status, suggestions, double-click to jump to step
Scenario Statistics (#32) — Steps count, actions breakdown, narrated/silent ratio, word count, estimated duration, unique URLs and selectors
Scenario Diff (#32) — Compare two scenario files side by side with added/removed/modified/unchanged status
Autosave & Recovery (#32) — Periodic autosave every 60s, recovery prompt on startup
Theme Engine — Dark teal (default) and light themes with full Fusion style support, QPalette sync, custom themes via JSON
Other improvements
- App icon (N teal on dark) with cross-platform process name
- Fusion style for consistent QSS rendering across platforms
- Light theme: fixed pressed/hover/selected states, QPalette per ColorGroup
- 19 screenshots in documentation
Stats
Install
pip install neurascreen[gui]
neurascreen gui
v1.4.0 — Advanced features
Advanced features
Macro Recorder (#9)
Record browser interactions and generate JSON scenarios automatically:
neurascreen record http://localhost:3000 -t "My demo"
Captures clicks, navigations, scrolls and keyboard events. Outputs a valid scenario ready to edit and generate.
Docker Container (#13)
Headless video generation in a container — no display required:
docker build -t neurascreen .
docker run --rm \
-v ./scenarios:/app/examples \
-v ./output:/app/output \
-e APP_URL=http://host.docker.internal:3000 \
neurascreen full examples/demo.json --srt --chapters
Includes Xvfb (virtual display), PulseAudio (audio playback), ffmpeg and Playwright Chromium.
Documentation
- Cross-Platform Setup — macOS, Linux, Windows
- Macro Recorder Guide — usage, workflow, limitations
- Subtitles & Chapters — SRT + YouTube chapters
- Docker Guide — build, CI/CD, troubleshooting
- Updated scenario-guide.md and CONTRIBUTING.md
Stats
- 160 tests (26 new), all passing
- All 4 milestones complete (v1.1 → v1.4)
Full Changelog: v1.3.0...v1.4.0
v1.3.0 — Production features
Production features
SRT Subtitles (#10)
- `--srt` flag on `run` and `full` commands
- Generates `.srt` file alongside video from narration timestamps
- Proper SRT timing format synced with real audio playback
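For reference, SRT uses numbered cues with `HH:MM:SS,mmm` start/end timestamps; the cue text below is illustrative, not actual output:

```
1
00:00:00,000 --> 00:00:04,200
Welcome to the dashboard.

2
00:00:04,200 --> 00:00:09,500
Let's create a new project.
```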
YouTube Chapter Markers (#11)
- `--chapters` flag on `run` and `full` commands
- Generates `.chapters.txt` with timestamps from step titles
- Auto-inserts `00:00 Introduction` when first chapter starts later
- Ready to paste into YouTube description
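The chapter file follows YouTube's plain-text convention of one `MM:SS Title` entry per line; these titles are illustrative:

```
00:00 Introduction
00:45 Creating a project
02:30 Exporting the video
```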
Batch Mode (#12)
- `neurascreen batch <folder>` processes all JSON scenarios in a folder
- Validates all scenarios upfront, skips invalid ones
- `--srt`, `--chapters`, `--no-narration` flags
- Summary report with status per scenario
Usage
# Video + subtitles + chapters
neurascreen full --srt --chapters scenario.json
# Batch all scenarios
neurascreen batch scenarios/ --srt --chapters
Stats
- 134 tests (21 new), all passing
- Tested with 67-step scenario (10m26s full generation)
Full Changelog: v1.2.0...v1.3.0
v1.2.0 — Cross-platform
Cross-platform support
NeuraScreen now runs on macOS, Linux and Windows.
Screen capture
- macOS: ffmpeg avfoundation (unchanged)
- Linux: ffmpeg x11grab with configurable `CAPTURE_DISPLAY`
- Windows: ffmpeg gdigrab (full desktop or specific window)
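On Linux the capture target can be set via `.env`; `:0` is the conventional default X display, shown here as an assumption for a single-seat setup:

```
# .env
CAPTURE_DISPLAY=:0
```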
Audio playback
- macOS: afplay (unchanged)
- Linux: paplay (PulseAudio) with aplay (ALSA) fallback
- Windows: PowerShell SoundPlayer
New
- `neurascreen/platform.py` — automatic OS detection and platform-specific command builders
- `CAPTURE_DISPLAY` config variable for Linux/Windows display targeting
- Dependency check helpers (`check_capture_dependencies()`, `check_audio_dependencies()`)
- 29 new unit tests (113 total)
Closes
- #1 Linux screen capture via x11grab
- #2 Linux audio playback via aplay/paplay
- #8 Windows screen capture via gdigrab
Full Changelog: v1.1.0...v1.2.0
NeuraScreen v1.1.0 — Packaging, Configurable Selectors & 84 Tests
What's New
pip install neurascreen (#6, #7)
- Install via `pip install neurascreen` with optional TTS dependencies (`[gradium]`, `[all]`)
- `neurascreen` CLI entry point: `neurascreen validate`, `neurascreen full`, `neurascreen --version`
- Package renamed from `src/` to `neurascreen/` for proper Python packaging
- `pyproject.toml` with full metadata and console_scripts
Configurable selectors (#3, #4)
- Login form selectors customizable via `.env` (`LOGIN_EMAIL_SELECTOR`, `LOGIN_PASSWORD_SELECTOR`, `LOGIN_SUBMIT_SELECTOR`)
- Canvas/modal selectors for `drag`, `delete_node`, `close_modal`, `zoom_out`, `fit_view` — all configurable
- Per-scenario override via `"selectors"` JSON field
- Priority: scenario JSON > `.env` > defaults (React Flow)
- Works with any web app, not just React Flow
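As a sketch of the per-scenario override, a scenario could carry its own `"selectors"` block; the key names inside it are assumptions for illustration, not the project's documented schema:

```json
{
  "title": "My demo",
  "selectors": {
    "login_email": "#email",
    "login_password": "#password",
    "login_submit": "button[type=submit]"
  },
  "steps": []
}
```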
84 unit tests (#5)
- 19 tests for assembler (ffmpeg, video duration, convert, assemble, silence, timestamps)
- 20 tests for utils (slugify, format_duration, setup_logger)
- CI runs on Python 3.12 + 3.13 with ffmpeg
CI & Security
- GitHub Actions workflow updated for new package structure
- `permissions: contents: read` on all workflows (CodeQL compliance)
- Branch protection on main (PR + CI required)
Full Changelog
NeuraScreen v1.0.0 — Initial Open Source Release
NeuraScreen v1.0.0
First public release of NeuraScreen, an automated demo video generator for web applications.
Features
- JSON scenario format — describe browser actions and narration text
- Playwright browser engine — drives a real Chromium browser
- Native screen capture — ffmpeg avfoundation (macOS) for high quality recording
- Real-time audio sync — narration played during capture for perfect synchronization
- 5 TTS providers — OpenAI, ElevenLabs, Gradium, Google Cloud, Coqui (self-hosted)
- AI-generated scenarios — works with Claude Code, ChatGPT, Mistral, Gemini, Ollama
- 14 browser actions — navigate, click, type, scroll, drag, wait, and more
- CLI interface — validate, preview, run, full, list
- Multi-monitor support — capture any screen via CDP positioning
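To make the scenario format concrete, a minimal file might look like the sketch below; the field names are assumptions based on the feature list above, not the exact schema (the bundled `examples/` scenarios show the real format):

```json
{
  "title": "Simple navigation",
  "steps": [
    {
      "action": "navigate",
      "url": "http://localhost:3000",
      "narration": "Let's open the application."
    },
    {
      "action": "click",
      "selector": "#get-started",
      "narration": "Click Get Started to begin."
    }
  ]
}
```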
Quick Start
pip install -r requirements.txt && playwright install chromium
cp .env.example .env # configure APP_URL and TTS
python -m src full examples/01-simple-navigation.json
Requirements
- Python 3.12+
- ffmpeg
- macOS (for screen capture)
- TTS API key (for narration)