Text-to-Speech with multiple backends for scientific workflows
Full Documentation · pip install scitex-audio
| # | Problem | Solution |
|---|---|---|
| 1 | TTS libraries balkanized -- ElevenLabs REST + gTTS HTTP + pyttsx3 native each have different APIs, failure modes, voice formats | stx.audio.speak(text) -- one call; automatic fallback ElevenLabs → LuxTTS → gTTS → pyttsx3 when upstream fails or API key is missing |
| 2 | Offline TTS sounds robotic -- espeak is a pi-era stopgap | LuxTTS -- offline 48 kHz voice-cloning on CPU; near-realtime after first-load warmup |
| 3 | Scripts can't alert users from a headless session -- print gets lost in logs | MCP tool audio_speak -- agents and long-running scripts report status audibly to the operator, with desktop notification fallback |
Scientific workflows increasingly rely on AI agents that run in parallel on remote servers or in headless environments. Researchers may prefer auditory feedback over text (experiment completion notifications, error alerts, accessibility), but remote and headless sessions have no direct access to the user's audio hardware.
SciTeX Audio provides a unified TTS interface with automatic backend fallback and smart local/remote routing to speakers on your desktop. It works on local machines, remote servers (via relay), and WSL environments with automatic audio path detection.
| Backend | Quality | Cost | Internet | Offline | Default Speed |
|---|---|---|---|---|---|
| ElevenLabs | High | Paid | Required | No | 1.2x |
| LuxTTS | High | Free | First download | Yes | 2.0x |
| Google TTS | Good | Free | Required | No | 1.5x |
| System TTS | Basic | Free | No | Yes | 150 wpm |
Table 1. Supported TTS backends. The fallback order (elevenlabs → luxtts → gtts → pyttsx3) ensures the best available quality is always used.
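The fallback order from Table 1 can be sketched as a simple try-in-order pattern. This is an illustration of the documented behavior, not the package's internals; the stub `backends` mapping is hypothetical.

```python
# Illustrative sketch of the documented fallback order; not scitex-audio's
# actual implementation. Each backend is any callable that raises on failure.
FALLBACK_ORDER = ["elevenlabs", "luxtts", "gtts", "pyttsx3"]

def speak_with_fallback(text, backends):
    """Try backends in quality order; return the name of the one that worked."""
    errors = {}
    for name in FALLBACK_ORDER:
        backend = backends.get(name)
        if backend is None:  # not installed or no API key
            continue
        try:
            backend(text)
            return name
        except Exception as exc:  # network error, quota, missing binary, ...
            errors[name] = exc
    raise RuntimeError(f"All TTS backends failed: {errors}")
```

In the real library this chain runs inside `speak()`, so a missing ElevenLabs key or a network outage degrades quality rather than crashing the script.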
Requires Python >= 3.10.
pip install scitex-audio

Install with specific backends:
pip install scitex-audio[gtts] # Google TTS
pip install scitex-audio[pyttsx3] # System TTS (+ apt install espeak-ng)
pip install scitex-audio[elevenlabs] # ElevenLabs
pip install scitex-audio[luxtts] # LuxTTS (voice cloning, offline)
pip install scitex-audio[all] # Everything

from scitex_audio import speak, available_backends
# Check what's available
print(available_backends()) # e.g., ['gtts', 'pyttsx3']
# Speak with auto-selected backend
speak("Hello from SciTeX Audio!")
# Choose a specific backend
speak("Bonjour", backend="gtts", voice="fr")
# Save without playing
speak("Save this", output_path="output.mp3", play=False)

Python API
import scitex_audio
scitex_audio.speak("Hello!") # auto backend
scitex_audio.speak("Fast", backend="gtts", speed=1.5)
scitex_audio.available_backends() # list backends
scitex_audio.check_wsl_audio() # WSL audio status
scitex_audio.generate_bytes("As bytes") # raw MP3 bytes
scitex_audio.stop_speech() # kill playback
tts = scitex_audio.get_tts("gtts") # get engine
tts.speak("With engine", voice="fr")

CLI Commands
scitex-audio --help-recursive # Show all commands
scitex-audio speak "Hello world" # Speak text
scitex-audio speak "Bonjour" -b gtts -v fr
scitex-audio backends # List backends
scitex-audio check # Audio status (WSL)
scitex-audio stop # Stop playback
scitex-audio relay --port 31293 # Start relay server
scitex-audio list-python-apis # List Python API tree
scitex-audio mcp list-tools # List MCP tools

MCP Server — for AI Agents
AI agents can speak through the MCP protocol for notifications and accessibility. The agent generates text, the MCP server synthesizes speech on the host machine, and audio plays through the user's local speakers — even when the agent runs on a remote server (via the relay on port 31293).
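MCP tool invocations travel as JSON-RPC `tools/call` requests. The shape below follows the MCP specification, but the argument names (`text`, `backend`) are assumptions about this server's schema, not taken from its published definition.

```python
import json

# Hypothetical MCP tool call for audio_speak. The "tools/call" method is
# standard MCP; the argument names inside "arguments" are assumed here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "audio_speak",
        "arguments": {"text": "Training run finished", "backend": "gtts"},
    },
}
print(json.dumps(request, indent=2))
```

The agent never touches audio devices itself; it only emits a request like this, and the MCP server does the synthesis and playback on the host.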
| Tool | Description |
|---|---|
| audio_speak | Convert text to speech with backend fallback |
| list_backends | List available TTS backends and status |
| check_audio_status | Check WSL audio connectivity |
| announce_context | Announce current directory and git branch |
Table 2. Four MCP tools available for AI-assisted audio. All tools accept JSON parameters and return JSON results.
Add .mcp.json to your project root. Use SCITEX_AUDIO_ENV_SRC to load all configuration from a .src file — this keeps .mcp.json static across environments:
{
"mcpServers": {
"scitex-audio": {
"command": "scitex-audio",
"args": ["mcp", "start"],
"env": {
"SCITEX_AUDIO_ENV_SRC": "${SCITEX_AUDIO_ENV_SRC}"
}
}
}
}

Then switch environments via your shell profile:
# Local machine (has speakers)
export SCITEX_AUDIO_ENV_SRC=~/.scitex/audio/local.src
# Remote server (no speakers, uses relay)
export SCITEX_AUDIO_ENV_SRC=~/.scitex/audio/remote.src

Generate a template .src file:
scitex-audio env-template -o ~/.scitex/audio/local.src

Example local.src:
export SCITEX_AUDIO_MODE=local
export SCITEX_AUDIO_RELAY_PORT=31293
export SCITEX_AUDIO_ELEVENLABS_API_KEY="your-key-here"

Example remote.src:
export SCITEX_AUDIO_MODE=remote
export SCITEX_AUDIO_RELAY_PORT=31293
export SCITEX_AUDIO_ELEVENLABS_API_KEY="your-key-here"

Or install globally:
scitex-audio mcp install

Skills — for AI Agent Discovery
Skills provide workflow-oriented guides that AI agents query to discover capabilities and usage patterns.
scitex-audio skills list # List available skill pages
scitex-audio skills get SKILL # Show main skill page
scitex-dev skills export --package scitex-audio # Export to Claude Code

| Skill | Content |
|---|---|
| quick-start | Basic usage, first call, return values |
| available-backends | All TTS backends, capabilities, install commands |
| smart-routing | Auto/local/remote modes, relay server, SSH tunneling |
| cli-commands | Complete CLI reference |
| mcp-tools-for-ai-agents | MCP tools and installation |
| common-workflows | Notification patterns, multi-backend, save audio |
Remote Audio Relay
When agents run on remote servers (NAS, cloud, HPC), they have no speakers. The relay server solves this: the local machine (with speakers) runs a lightweight HTTP server, and the remote agent sends speech requests over an SSH tunnel.
Remote Server (NAS/Cloud) Local Machine (has speakers)
┌─────────────────────┐ ┌──────────────────────┐
│ AI Agent │ │ Relay Server │
│ speak("Hello") │ │ scitex-audio relay │
│ ↓ │ │ ↓ │
│ POST /speak ────────┼── SSH ─────┼→ TTS engine │
│ (port 31293) │ tunnel │ ↓ │
│ │ │ 🔊 Speakers │
└─────────────────────┘ └──────────────────────┘
Step 1: Start relay on local machine (has speakers)
scitex-audio relay --port 31293

Step 2: SSH tunnel from local to remote
Add to your ~/.ssh/config:
Host my-server
HostName 192.168.0.69
User myuser
RemoteForward 31293 127.0.0.1:31293 # Audio relay

Then connect: ssh my-server. The tunnel maps remote port 31293 back to your local relay.
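A quick way to confirm the forwarded port is live on the remote side is a plain TCP probe. This helper is a convenience suggested by this guide, not a scitex-audio command.

```python
import socket

def tunnel_up(host="127.0.0.1", port=31293, timeout=2.0):
    """Return True if something is listening on the forwarded relay port.

    Hypothetical sanity check for the RemoteForward tunnel: a successful
    TCP connect means the SSH session is forwarding the port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it on the remote server after connecting; `False` usually means the SSH session was opened without the `RemoteForward` line.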
Step 3: Configure remote environment
On the remote server, set:
export SCITEX_AUDIO_MODE=remote
export SCITEX_AUDIO_RELAY_PORT=31293
# URL is auto-detected from the tunnel (localhost:31293)

Now speak("Hello") on the remote server plays audio on your local speakers.
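The auto-detection can be pictured as a small precedence rule over the relay environment variables: an explicit URL wins, then host plus port, then the localhost default. This is a sketch of the documented precedence, not the package's actual resolver.

```python
import os

def resolve_relay_url(env=None):
    """Sketch: derive the relay URL from environment variables.

    Precedence (as documented): SCITEX_AUDIO_RELAY_URL, then
    SCITEX_AUDIO_RELAY_HOST + SCITEX_AUDIO_RELAY_PORT, then the
    localhost default on port 31293.
    """
    env = os.environ if env is None else env
    url = env.get("SCITEX_AUDIO_RELAY_URL")
    if url:
        return url
    host = env.get("SCITEX_AUDIO_RELAY_HOST", "localhost")
    port = env.get("SCITEX_AUDIO_RELAY_PORT", "31293")
    return f"http://{host}:{port}"
```

With an SSH tunnel in place, the empty-environment default `http://localhost:31293` is exactly the forwarded port, which is why no URL needs to be set on the remote side.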
| Endpoint | Method | Description |
|---|---|---|
| /speak | POST | Play TTS ({"text": "...", "backend": "gtts"}) |
| /health | GET | Health check (returns {"status": "ok"}) |
| /list_backends | GET | List available backends |
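A relay client needs nothing beyond the standard library: POST the documented payload to `/speak`. The helper names below are illustrative; only the endpoint path and the `{"text": ..., "backend": ...}` body come from the table above.

```python
import json
import urllib.request

def build_speak_request(text, backend="gtts", base_url="http://localhost:31293"):
    """Build the POST /speak request with the documented JSON payload."""
    body = json.dumps({"text": text, "backend": backend}).encode()
    return urllib.request.Request(
        f"{base_url}/speak",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def relay_speak(text, backend="gtts", base_url="http://localhost:31293"):
    """Send a speech request to the relay and return its JSON response."""
    req = build_speak_request(text, backend, base_url)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())
```

This is effectively what the library does for you when `SCITEX_AUDIO_MODE=remote`, so calling the endpoint directly is mainly useful for debugging the tunnel.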
Add to your shell profile (e.g., ~/.bashrc) on the local machine (the one with speakers):
export SCITEX_AUDIO_RELAY_PORT=31293
export SCITEX_AUDIO_MODE=local
_audio_health_ok() {
local port="${SCITEX_AUDIO_RELAY_PORT:-31293}"
curl -sf --connect-timeout 2 "http://localhost:$port/health" >/dev/null 2>&1
}
_start_audio_relay() {
local port="${SCITEX_AUDIO_RELAY_PORT:-31293}"
# Already healthy -> skip
_audio_health_ok && return
# Start relay in background
scitex-audio relay --port "$port" --force &>/dev/null &
disown
}
_start_audio_relayOn the remote server (NAS, cloud, HPC), add to the shell profile:
export SCITEX_AUDIO_RELAY_PORT=31293
export SCITEX_AUDIO_MODE=remote
# Relay URL is auto-detected from the SSH RemoteForward tunnelWith this setup, every SSH session automatically has audio routed back to your speakers. AI agents (Claude Code, etc.) running on the remote server call speak() and the audio plays locally.
Environment Variables
| Variable | Default | Description |
|---|---|---|
| SCITEX_AUDIO_MODE | auto | Audio mode: local, remote, or auto |
| SCITEX_AUDIO_RELAY_URL | (auto) | Full relay URL (e.g., http://localhost:31293) |
| SCITEX_AUDIO_RELAY_HOST | (none) | Relay host (combined with port to build URL) |
| SCITEX_AUDIO_RELAY_PORT | 31293 | Relay server port |
| SCITEX_AUDIO_HOST | 0.0.0.0 | Relay server bind host |
| SCITEX_AUDIO_ELEVENLABS_API_KEY | (none) | ElevenLabs API key |
| SCITEX_DIR | ~/.scitex | Base directory for audio cache files |
| SCITEX_CLOUD | (none) | Set to true for browser relay mode (OSC escape) |
Table 3. Environment variables. Port 31293 encodes "sa-i-te-ku-su" (サイテクス) in Japanese phone keypad mapping.
- auto (default): Checks local audio availability. If local audio is suspended or unavailable and a relay URL is detected, routes to the relay; otherwise uses local playback.
- local: Always use the local TTS engine and speakers.
- remote: Always send speech to the relay server; fails if the relay is unreachable.
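The three modes reduce to a small decision function. The sketch below mirrors the rules as stated; it is an illustration, not the package's actual routing code.

```python
def resolve_mode(mode, local_audio_ok, relay_url):
    """Sketch of the documented routing rules; returns "local" or "relay".

    mode: one of "local", "remote", "auto".
    local_audio_ok: whether a local audio device is available.
    relay_url: detected relay URL, or None.
    """
    if mode == "local":
        return "local"
    if mode == "remote":
        if not relay_url:
            raise RuntimeError("SCITEX_AUDIO_MODE=remote but no relay detected")
        return "relay"
    # auto: prefer local audio; fall back to the relay when a URL is known
    if local_audio_ok:
        return "local"
    if relay_url:
        return "relay"
    return "local"
```

Note the asymmetry: `remote` fails hard when the relay is missing, while `auto` quietly falls back to local playback.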
SciTeX Audio is part of SciTeX.
The SciTeX system follows the Four Freedoms for Research below, inspired by the Free Software Definition:
Four Freedoms for Research
- The freedom to run your research anywhere — your machine, your terms.
- The freedom to study how every step works — from raw data to final manuscript.
- The freedom to redistribute your workflows, not just your papers.
- The freedom to modify any module and share improvements with the community.
AGPL-3.0 — because we believe research infrastructure deserves the same freedoms as the software it runs on.
- LuxTTS — open-source, offline TTS engine with voice cloning support