

Voxlert

LLM-generated voice notifications for Claude Code, Cursor, OpenAI Codex, and OpenClaw, spoken by game characters like the StarCraft Adjutant, Kerrigan, C&C EVA, SHODAN, and more.

Why Voxlert?

Existing notification chimes (like peon-ping) do a great job of telling you when something happened, but not what happened or which agent needs your attention. If you have several agent sessions running at once, you end up alt-tabbing through windows just to find the one waiting on you.

Voxlert makes each session speak in a distinct character voice with its own tone and vocabulary. You hear "Query efficiency restored to nominal" from the HEV Suit in one window and "Pathetic test suite for code validation processed" from SHODAN in another, and you know immediately what changed. Because phrases are generated by an LLM instead of picked from a tiny fixed set, they stay varied instead of becoming wallpaper.

Who this is for

Voxlert is for users who:

  • Run two or more AI coding agent sessions concurrently (Claude Code, Cursor, Codex, OpenClaw)
  • Get interrupted by notification chimes but can't tell which window needs attention
  • Want ambient audio feedback that doesn't require looking at a screen
  • Are comfortable installing local tooling (Node.js, optionally Python for TTS)

If you run a single agent session and it's always in focus, Voxlert adds personality but not much utility. If you run several at once and context-switch between them, it's meaningfully useful.

Quick Start

1. Install prerequisites

Minimum: Node.js 18+ and afplay (macOS built-in) or FFmpeg (Windows/Linux). That's enough to get started; TTS and SoX are optional.

| Aspect | macOS | Windows | Linux |
|---|---|---|---|
| Node.js 18+ | nodejs.org or `brew install node` | nodejs.org or `winget install OpenJS.NodeJS` | nodejs.org or distro package (for example `sudo apt install nodejs`) |
| Audio playback | Built-in (`afplay`) | FFmpeg so `ffplay` is on PATH | FFmpeg so `ffplay` is on PATH |
| Audio effects | SoX (optional) | SoX (optional) | SoX (optional) |

See Installing FFmpeg and Installing SoX for platform-specific commands.

You will also want:

  • An LLM API key from OpenRouter (recommended), OpenAI, Google Gemini, or Anthropic. You can skip this and use fallback phrases only.
  • At least one TTS backend if you want spoken output instead of notifications only.

| Backend | Best for | Requirements |
|---|---|---|
| Qwen3-TTS (recommended) | Apple Silicon or NVIDIA GPU | Python 3.13+, 16 GB RAM, ~8 GB disk |
| Chatterbox | Any platform with GPU | Python 3.10+, CUDA or MPS |

The setup wizard auto-detects running TTS backends. If none are running yet, setup still completes, but you will only get text notifications and fallback phrases until you start one and rerun setup.

Can't run local TTS? Both backends require a GPU or Apple Silicon. Voxlert still works without TTS — you'll get text notifications and fallback phrases. Need help? Post in Setup help & troubleshooting.

2. Install and run setup

npx voxlert --onboard

The setup wizard configures:

  • LLM provider and API key
  • Voice pack downloads
  • Active voice pack
  • TTS backend
  • Platform hooks for Claude Code, Cursor, and Codex

For OpenClaw, install the separate OpenClaw plugin.

3. Start a TTS backend for spoken voice

Start Qwen3-TTS or Chatterbox, then run:

voxlert setup

This lets the wizard detect the backend and store it in config.

4. Verify

voxlert test "Hello"

You should hear a phrase and see a notification. If you do not hear speech, check that:

  • A TTS server is running
  • voxlert config shows the expected tts_backend

Visual notifications: Voxlert shows a popup with each phrase; no extra installation is required. On macOS you can use the custom overlay or system Notification Center. On Windows and Linux you get system toasts. Change it anytime with:

voxlert notification

From a git clone

Run npm install inside cli/, then use node src/cli.js or link it globally if you prefer. Config and cache live in ~/.voxlert (Windows: %USERPROFILE%\.voxlert).

Development

Run tests locally with:

npm test

For release-impacting changes, add a changeset before opening a PR:

npm run changeset

Supported Voices

The sc1-adjutant preview below uses the animated in-game portrait GIF from assets/sc1-adjutant.gif.

| Pack ID | Voice | Source | Status |
|---|---|---|---|
| `sc1-adjutant` | SC1 Adjutant | StarCraft | ✅ Available |
| `sc2-adjutant` | SC2 Adjutant | StarCraft II | ✅ Available |
| `red-alert-eva` | EVA | Command & Conquer: Red Alert | ✅ Available |
| `sc1-kerrigan` | SC1 Kerrigan | StarCraft | ✅ Available |
| `sc1-kerrigan-infested` | SC1 Infested Kerrigan | StarCraft | ✅ Available |
| `sc2-kerrigan-infested` | SC2 Infested Kerrigan | StarCraft II | ✅ Available |
| `sc1-protoss-advisor` | Protoss Advisor | StarCraft | ✅ Available |
| `ss1-shodan` | SHODAN | System Shock | ✅ Available |
| `hl-hev-suit` | HEV Suit | Half-Life | ✅ Available |

More coming soon: Request a voice

Pick a voice pack interactively with:

voxlert voice

Integrations

Claude Code

Installed through voxlert setup. Claude Code hook events are processed by:

voxlert hook

Cursor

Installed through voxlert setup, or add hooks manually in ~/.cursor/hooks.json:

{
  "version": 1,
  "hooks": {
    "sessionStart": [{ "command": "voxlert cursor-hook", "timeout": 10 }],
    "sessionEnd": [{ "command": "voxlert cursor-hook", "timeout": 10 }],
    "stop": [{ "command": "voxlert cursor-hook", "timeout": 10 }],
    "postToolUseFailure": [{ "command": "voxlert cursor-hook", "timeout": 10 }],
    "preCompact": [{ "command": "voxlert cursor-hook", "timeout": 10 }]
  }
}

| Cursor Hook Event | Voxlert Event | Category |
|---|---|---|
| `sessionStart` | SessionStart | session.start |
| `sessionEnd` | SessionEnd | session.end |
| `stop` | Stop | task.complete |
| `postToolUseFailure` | PostToolUseFailure | task.error |
| `preCompact` | PreCompact | resource.limit |

Restart Cursor after installing or changing hooks. See Cursor integration for details.
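
The event mapping above can be expressed as a simple lookup. A minimal sketch (illustrative only, not the actual Voxlert internals):

```javascript
// Sketch: mapping Cursor hook events to Voxlert event categories,
// following the table above. Unknown events are ignored.
const CURSOR_EVENT_MAP = {
  sessionStart: "session.start",
  sessionEnd: "session.end",
  stop: "task.complete",
  postToolUseFailure: "task.error",
  preCompact: "resource.limit",
};

function categorize(cursorEvent) {
  return CURSOR_EVENT_MAP[cursorEvent] ?? null;
}

console.log(categorize("stop")); // → "task.complete"
```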

Codex

Voxlert uses Codex's notify config so that completed agent turns call:

voxlert codex-notify

voxlert setup can install or update the notify entry in ~/.codex/config.toml. See Codex integration.

OpenClaw

OpenClaw uses a separate plugin. See OpenClaw integration for installation, config, and troubleshooting.

Common Commands

voxlert setup                  # Interactive setup wizard
voxlert voice                  # Interactive voice pack picker
voxlert pack list              # List available voice packs
voxlert pack show              # Show active pack details
voxlert pack use <pack-id>     # Switch active voice pack
voxlert config                 # Show current configuration
voxlert config set <key> <val> # Set a config value
voxlert volume                 # Show or change playback volume
voxlert notification           # Choose popup / system / off
voxlert test "<text>"          # Run the full pipeline
voxlert log                    # Stream activity log
voxlert uninstall              # Remove installed integrations
voxlert help                   # Show full help

How It Works

flowchart TD
    A1[Claude Code Hook] --> B[voxlert.sh]
    A2[OpenClaw Plugin] --> B
    A3[Cursor Hook] --> B
    A4[Codex notify] --> B
    B --> C[src/voxlert.js]
    C --> D{Event type?}
    D -- "Contextual (e.g. Stop)" --> E[LLM<br><i>generate in-character phrase</i>]
    D -- "Other events" --> F[Fallback phrases<br><i>from voice pack</i>]
    E --> G{TTS backend?}
    F --> G
    G -- Chatterbox --> G1[Chatterbox TTS<br><i>local speech synthesis</i>]
    G -- Qwen3 --> G2[Qwen3-TTS<br><i>local speech synthesis</i>]
    G1 --> H[Audio processing<br><i>echo · normalize · post-process</i>]
    G2 --> H
    H --> I[(Cache<br><i>LRU, keyed by phrase + params</i>)]
    I --> J[Playback queue<br><i>serial via file lock</i>]
    J --> K[afplay / ffplay]
  1. A hook or notify event fires from Claude Code, Cursor, Codex, or OpenClaw.
  2. Voxlert maps it to an event category and loads the active voice pack.
  3. Contextual events such as task completion or tool failure can use the configured LLM to generate a short in-character phrase.
  4. Other events use predefined fallback phrases from the pack.
  5. The chosen phrase is synthesized by the configured TTS backend.
  6. Audio is optionally post-processed, cached, then played through a serialized queue.

What does it cost?

The LLM step (turning events into in-character phrases) uses a small, cheap model — not Claude. Each notification costs a fraction of a cent via OpenRouter, or zero if you use a local LLM. TTS and audio run entirely on your machine at zero cost. You can also skip the LLM entirely and use only fallback phrases from the voice pack (no API key needed).

Fully local mode (no cloud at all)

Voxlert supports local LLM servers for the phrase generation step. Run voxlert setup and choose "Local LLM (Ollama / LM Studio / llama.cpp)". Any OpenAI-compatible local server works:

| Server | Default URL |
|---|---|
| Ollama | `http://localhost:11434/v1` |
| LM Studio | `http://localhost:1234/v1` |
| llama.cpp server | `http://localhost:8080/v1` |

Combined with local TTS (Qwen3-TTS), this gives you a completely offline setup — no API keys, no cloud, no cost.
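
"OpenAI-compatible" means the server accepts the standard chat-completions request shape. A sketch of the payload such a server expects (model name and prompt are placeholders; nothing is actually sent here):

```javascript
// Sketch: building an OpenAI-compatible chat request for a local server.
// Base URL matches the table above; "llama3" is a placeholder model name.
function buildChatRequest(baseUrl, model, prompt) {
  return {
    url: `${baseUrl}/chat/completions`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
        max_tokens: 60, // notification phrases are short
      }),
    },
  };
}

const req = buildChatRequest(
  "http://localhost:11434/v1",
  "llama3",
  "Announce a finished test run in character"
);
// fetch(req.url, req.options) would perform the call
```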

Configuration

Run voxlert config path to find config.json. You can edit it directly or use voxlert setup and voxlert config set.

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `true` | Master on/off switch |
| `llm_backend` | string | `"openrouter"` | LLM provider: `openrouter`, `openai`, `gemini`, `anthropic`, or `local` |
| `llm_api_key` | string \| null | `null` | API key for the chosen LLM provider |
| `llm_model` | string \| null | `null` | Model ID (`null` = provider default) |
| `openrouter_api_key` | string \| null | `null` | Legacy alias used when `llm_backend` is `openrouter` and `llm_api_key` is empty |
| `openrouter_model` | string \| null | `null` | Legacy alias used when `llm_model` is empty and backend is `openrouter` |
| `chatterbox_url` | string | `"http://localhost:8004"` | Chatterbox TTS server URL |
| `tts_backend` | string | `"qwen"` | TTS backend: `qwen` or `chatterbox` |
| `active_pack` | string | `"sc1-kerrigan-infested"` | Active voice pack ID |
| `volume` | number | `1.0` | Playback volume (0.0-1.0) |
| `categories` | object | | Per-category enable/disable settings |
| `logging` | boolean | `true` | Activity log in `~/.voxlert/voxlert.log` |
| `error_log` | boolean | `false` | Fallback/error log in `~/.voxlert/fallback.log` |
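
The defaults and legacy aliases in the table resolve roughly like this. Field names come from the table; the merge logic itself is an illustrative sketch, not the actual implementation:

```javascript
// Defaults taken from the configuration table above.
const DEFAULTS = {
  enabled: true,
  llm_backend: "openrouter",
  llm_api_key: null,
  llm_model: null,
  chatterbox_url: "http://localhost:8004",
  tts_backend: "qwen",
  active_pack: "sc1-kerrigan-infested",
  volume: 1.0,
};

function resolveConfig(userConfig) {
  const cfg = { ...DEFAULTS, ...userConfig };
  // Legacy aliases are only consulted for the openrouter backend,
  // and only when the generic fields are empty.
  if (cfg.llm_backend === "openrouter") {
    cfg.llm_api_key = cfg.llm_api_key || userConfig.openrouter_api_key || null;
    cfg.llm_model = cfg.llm_model || userConfig.openrouter_model || null;
  }
  return cfg;
}
```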

Event categories

Event categories apply across Claude Code, Cursor, Codex, and OpenClaw where the corresponding event exists.

| Category | Hook Event | Description | Default |
|---|---|---|---|
| `session.start` | SessionStart | New session begins | on |
| `session.end` | SessionEnd | Session ends | on |
| `task.complete` | Stop | Agent finishes a task | on |
| `task.acknowledge` | UserPromptSubmit | User sends a prompt | off |
| `task.error` | PostToolUseFailure | A tool call fails | on |
| `input.required` | PermissionRequest | Agent needs user approval | on |
| `resource.limit` | PreCompact | Context window nearing limit | on |
| `notification` | Notification | General notification | on |

Omitted categories default to enabled. Set a category to false to disable it, or back to true to re-enable it:

voxlert config set categories.task.complete true
voxlert config set categories.task.acknowledge false
voxlert config set categories.session.start true
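
The `categories.<name>` keys can be handled with a small setter plus the omitted-means-enabled default. A sketch that treats everything after `categories.` as the category name, since category IDs themselves contain dots (illustrative only; the real CLI may parse differently):

```javascript
// Sketch: applying `voxlert config set categories.<name> <bool>`.
function setCategory(config, key, value) {
  const name = key.replace(/^categories\./, "");
  config.categories = { ...config.categories, [name]: value };
  return config;
}

// Omitted categories default to enabled.
function isCategoryEnabled(config, name) {
  return (config.categories || {})[name] !== false;
}

const cfg = setCategory({}, "categories.task.acknowledge", false);
console.log(isCategoryEnabled(cfg, "task.acknowledge")); // → false
console.log(isCategoryEnabled(cfg, "task.complete"));    // → true (omitted)
```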

Logging

  • Activity logging is on by default and writes one line per event to ~/.voxlert/voxlert.log.
  • Error logging is off by default and records fallback situations in ~/.voxlert/fallback.log.
  • Debug logging for hook sources is written to ~/.voxlert/hook-debug.log.

Useful commands:

voxlert log
voxlert log on
voxlert log off
voxlert log path
voxlert log error on
voxlert log error off
voxlert log error-path

You can also manage configuration interactively with the /voxlert-config slash command in Claude Code.

Integration behavior

  • voxlert setup installs hooks for Claude Code, Cursor, and Codex.
  • Re-run setup anytime to add a platform you skipped earlier.
  • voxlert uninstall removes Claude Code, Cursor, and Codex integration.
  • OpenClaw is managed separately through its plugin.
  • The global enabled flag disables processing everywhere; there is no separate per-integration toggle in config.json.

Full CLI Reference

voxlert setup                  # Interactive setup wizard (LLM, voice, TTS, hooks)
voxlert hook                   # Process hook event from stdin (Claude Code)
voxlert cursor-hook            # Process hook event from stdin (Cursor)
voxlert codex-notify           # Process notify payload from argv (Codex)
voxlert config                 # Show current configuration
voxlert config show            # Show current configuration
voxlert config set <k> <v>     # Set a config value (supports categories.X dot notation)
voxlert config path            # Print config file path
voxlert log                    # Stream activity log (tail -f style)
voxlert log path               # Print activity log file path
voxlert log error-path         # Print error/fallback log file path
voxlert log on | off           # Enable or disable activity logging
voxlert log error on | off     # Enable or disable error logging
voxlert voice                  # Interactive voice pack picker
voxlert pack list              # List available voice packs
voxlert pack show              # Show active pack details
voxlert pack use <pack-id>     # Switch active voice pack
voxlert volume                 # Show current volume and prompt for new value
voxlert volume <0-100>         # Set playback volume (0 = mute, 100 = max)
voxlert notification           # Choose notification style (popup / system / off)
voxlert test "<text>"          # Run full pipeline: LLM -> TTS -> audio playback
voxlert cost                   # Show accumulated token usage and estimated cost
voxlert cost reset             # Clear the usage log
voxlert uninstall              # Remove hooks from Claude Code, Cursor, and Codex, optionally config/cache
voxlert help                   # Show help
voxlert --version              # Show version

Platform Notes

  • Windows: Install Node.js and FFmpeg. Ensure the npm global bin directory is on PATH so hooks can find voxlert or voxlert.cmd.
  • Linux: Install Node and FFmpeg so ffplay is on PATH.
  • macOS: Playback uses the built-in afplay; install SoX if you want optional effects and processing.

Uninstall

voxlert uninstall
npm uninstall -g @settinghead/voxlert

This removes Voxlert hooks from Claude Code, Cursor, and Codex, the voxlert-config skill, and optionally your local config and cache in ~/.voxlert.

Advanced

See Creating Voice Packs for building your own character voice packs.

Credits

Need help?

Having trouble with setup? Post in the Setup help & troubleshooting Discussion.

License

MIT - see LICENSE.
