# NomiAssistantTG

NomiAssistantTG connects your Telegram bot directly to Nomi.ai. Send text or voice messages to your Nomi, and receive replies instantly. Built with a focus on code quality, scalability, and ease of deployment.
## 🏗️ Architecture

```mermaid
graph LR
    A[📱 User] -->|Text/Voice| B[🤖 Telegram Bot]
    B -->|Audio| C[🎙️ Vosk Service]
    C -->|Text| D[🧠 Service Layer]
    B -->|Text| D
    D -->|Request| E[🌐 Nomi API]
    E -->|Reply| D
    D -->|Response| B
    style A fill:#4A90E2,stroke:#2c3e50,stroke-width:2px,color:#fff
    style B fill:#2CA5E0,stroke:#2c3e50,stroke-width:2px,color:#fff
    style C fill:#50C878,stroke:#2c3e50,stroke-width:2px,color:#fff
    style D fill:#B19CD9,stroke:#2c3e50,stroke-width:2px,color:#fff
    style E fill:#E92063,stroke:#2c3e50,stroke-width:2px,color:#fff
```
## ✨ Features

| Feature | Description | Status |
|---|---|---|
| 💬 Direct Chat | Real-time messaging with your Nomi | ✅ |
| 🎙️ Voice Messages | Offline STT conversion using Vosk & FFmpeg | ✅ |
| 🐳 Dockerized | One-command deploy on any OS | ✅ |
| 🛡️ Type Safety | 100% Pydantic validation for API responses | ✅ |
| 🧪 Tested | High coverage with Pytest & Respx | ✅ |
| 🔄 Retry Logic | Smart backoff for 429/500 errors | ✅ |
| 🎯 Clean Architecture | Separation of concerns (handlers, services, clients) | ✅ |
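The "Clean Architecture" row refers to the handler → service → client layering shown in the diagram. Schematically it might look like this (class and method names here are illustrative, not the project's exact code):

```python
class NomiClient:
    """Transport layer: talks HTTP to the Nomi API (stubbed here)."""
    def send_message(self, nomi_id: str, text: str) -> str:
        return f"[reply from {nomi_id} to {text!r}]"

class NomiService:
    """Business-logic layer: picks the Nomi, delegates transport to the client."""
    def __init__(self, client: NomiClient, default_nomi: str):
        self.client = client
        self.default_nomi = default_nomi

    def chat(self, text: str) -> str:
        return self.client.send_message(self.default_nomi, text)

# A Telegram handler only ever talks to the service, never to the client directly:
service = NomiService(NomiClient(), default_nomi="abc-123")
```

Keeping the HTTP client behind a service boundary is what makes the suite easy to test with Respx-style HTTP mocking.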
## 🐳 Quick Start (Docker)

> [!TIP]
> This is the recommended way to run the bot.

```bash
# 1. Clone the repository
git clone https://github.com/AmaLS367/Nomi_ai_tg.git
cd Nomi_ai_tg

# 2. Configure environment
cp .env.example .env
# Edit .env and set your TELEGRAM_BOT_TOKEN and NOMI_API_KEY

# 3. Start the services
docker-compose up -d --build
```

The Docker setup includes:
- Python 3.11 slim base image
- FFmpeg pre-installed for audio processing
- Vosk model auto-downloaded (small English model)
- Non-root user for security
- Volume mounts for persistent data and logs
## 🛠️ Manual Setup Guide
If you want to run it without Docker (e.g., for debugging), you will need:

- Python 3.11+
- FFmpeg (must be in `PATH` or set via `FFMPEG_BIN`)
- Git
```bash
# 1. Clone repository
git clone https://github.com/AmaLS367/Nomi_ai_tg.git
cd Nomi_ai_tg

# 2. Create virtual environment
python -m venv .venv
# Windows
.\.venv\Scripts\Activate.ps1
# Linux/Mac
source .venv/bin/activate

# 3. Install dependencies
pip install -e .[dev]

# 4. Configure environment
cp .env.example .env
# Edit .env with your tokens

# 5. Download Vosk model (optional, for voice support)
# Download from https://alphacephei.com/vosk/models
# Extract to ./models/vosk-model-small-en-us-0.15
# Set VOSK_MODEL_PATH=./models/vosk-model-small-en-us-0.15 in .env

# 6. Run the bot
python run.py
```

## ⚙️ Configuration

### Required

| Variable | Description | Example |
|---|---|---|
| `TELEGRAM_BOT_TOKEN` | Token from @BotFather | `123456:ABC-DEF...` |
| `NOMI_API_KEY` | API key from Nomi.ai Integration Settings | `sk_live_xxx...` |
### Optional

| Variable | Description | Default |
|---|---|---|
| `NOMI_DEFAULT_NOMI_UUID` | Specific Nomi ID (auto-selects first if not set) | None |
| `LOG_LEVEL` | Logging verbosity | `INFO` |
| `REQUEST_TIMEOUT_SEC` | HTTP timeout for the Nomi API (seconds) | `30` |
| `RATE_LIMIT_RPS` | Requests-per-second limit | `0.4` |
| `VOSK_MODEL_PATH` | Path to the Vosk model folder | `/app/models/vosk-model` |
| `FFMPEG_BIN` | Explicit FFmpeg binary path | auto-detected |
| `MAX_AUDIO_BYTES` | Max voice message size (bytes) | `10485760` (10 MB) |
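The project reads these variables through Pydantic settings (`src/core/config.py`). Purely as an illustration of the defaults listed above, a stdlib-only sketch of the same lookup might read:

```python
import os

def load_settings() -> dict:
    """Read optional settings from the environment, falling back to the
    documented defaults. Illustrative sketch only; the real project uses
    Pydantic settings with full type validation."""
    return {
        "nomi_default_nomi_uuid": os.environ.get("NOMI_DEFAULT_NOMI_UUID"),  # None -> auto-select first Nomi
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        "request_timeout_sec": float(os.environ.get("REQUEST_TIMEOUT_SEC", "30")),
        "rate_limit_rps": float(os.environ.get("RATE_LIMIT_RPS", "0.4")),
        "vosk_model_path": os.environ.get("VOSK_MODEL_PATH", "/app/models/vosk-model"),
        "ffmpeg_bin": os.environ.get("FFMPEG_BIN"),  # None -> auto-detect on PATH
        "max_audio_bytes": int(os.environ.get("MAX_AUDIO_BYTES", str(10 * 1024 * 1024))),
    }
```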
### Finding your Nomi UUID

You can query your account to find your Nomi IDs:
```bash
# Using curl
curl -H "Authorization: YOUR_NOMI_API_KEY" https://api.nomi.ai/v1/nomis
```

```powershell
# Using PowerShell
$headers = @{ Authorization = "YOUR_NOMI_API_KEY" }
Invoke-RestMethod -Uri "https://api.nomi.ai/v1/nomis" -Headers $headers
```

Copy the `id` field of your desired Nomi and set it in `.env` as `NOMI_DEFAULT_NOMI_UUID`.
## 🤖 Commands & Usage

| Command | Description |
|---|---|
| `/start` | Initialize the bot and show a welcome message |
| `/status` | Display the currently active Nomi (name and UUID) |
| `/help` | Show available commands and usage tips |
- Text messages: Simply send any text to chat with your Nomi
- Voice messages: Record and send voice notes (automatically transcribed)
- Images/Files: Send URLs in messages or captions (Nomi API doesn't support direct uploads)
## 📁 Project Structure

```
nomi_tg_companion/
├── .github/
│   └── workflows/           # CI/CD pipelines
│       ├── quality.yml      # Linting, typing, testing
│       └── docker.yml       # Docker build validation
├── src/
│   ├── app.py               # Main application entry
│   ├── bot/
│   │   ├── app_bot.py       # Bot instance creation
│   │   └── handlers/        # Message and command handlers
│   │       ├── commands.py
│   │       └── messages.py
│   ├── core/
│   │   ├── config.py        # Pydantic settings
│   │   ├── errors.py        # Custom exceptions
│   │   └── logging.py       # Logging setup
│   ├── nomi/
│   │   ├── client.py        # HTTP client with retry logic
│   │   ├── schemas.py       # Pydantic models for API
│   │   └── service.py       # Business logic layer
│   └── stt/
│       └── vosk_stt.py      # Speech-to-text service
├── tests/                   # Pytest suite (80% coverage)
│   ├── conftest.py
│   ├── test_config.py
│   ├── test_nomi_client.py
│   ├── test_nomi_service.py
│   └── test_stt.py
├── docker-compose.yml       # Docker orchestration
├── Dockerfile               # Production image
├── pyproject.toml           # Dependencies & tool configs
└── run.py                   # Simple entry point
```
## 🧑‍💻 Development

We use strict quality gates. Before committing, ensure all checks pass:
```bash
# Run Linter (Ruff)
ruff check .
ruff format .

# Run Type Checker (Mypy)
export PYTHONPATH=src
mypy --config-file pyproject.toml src/core src/nomi src/stt

# Run Tests (Pytest)
pytest

# With coverage report
pytest --cov=src --cov-report=term-missing
```

See CONTRIBUTING.md for detailed guidelines.
## 🔧 Troubleshooting

### FFmpeg not found

**Symptom:** Voice messages fail with "FFmpeg not found"

**Solution:**
- Docker: FFmpeg is pre-installed, no action needed
- Local:
  - Windows: Download from [ffmpeg.org](https://ffmpeg.org), add it to PATH, or set `FFMPEG_BIN` in `.env`
  - Linux: `sudo apt install ffmpeg`
  - Mac: `brew install ffmpeg`
### Vosk model not found

**Symptom:** Voice transcription fails with "VOSK_MODEL_PATH invalid"

**Solution:**
- Download a model from [Vosk models](https://alphacephei.com/vosk/models)
- Extract it to `./models/vosk-model-small-en-us-0.15`
- Set `VOSK_MODEL_PATH=./models/vosk-model-small-en-us-0.15` in `.env`
### Bot not responding

**Symptom:** Messages sent to the bot receive no reply

**Checklist:**
- ✅ Verify `TELEGRAM_BOT_TOKEN` is correct
- ✅ Check the bot is not paused in @BotFather
- ✅ Ensure `NOMI_API_KEY` is valid (test with curl)
- ✅ Check logs: `docker-compose logs -f` or `./data/logs/app.log`
### 429 Rate Limit errors

**Symptom:** "Rate limit exceeded" messages

**Solution:**
- The bot has built-in retry logic with exponential backoff
- Adjust `RATE_LIMIT_RPS` in `.env` (default `0.4` ≈ 1 request per 2.5 seconds)
- Wait a few seconds between messages
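For reference, the retry behaviour described above amounts to an exponential backoff schedule. The base delay, cap, and jitter-free form below are illustrative assumptions, not the client's exact parameters:

```python
def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0) -> list[float]:
    """Delay in seconds before each retry of a 429/500 response:
    base * 2**n, capped so a long outage doesn't stall forever.
    Production clients typically also add random jitter."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]
```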
## 🧰 Tech Stack

| Category | Technology | Purpose |
|---|---|---|
| Framework | Aiogram 3 | Async Telegram bot framework |
| HTTP Client | HTTPX | Async HTTP with retry logic |
| Validation | Pydantic V2 | Type-safe data models |
| STT | Vosk | Offline speech recognition |
| Audio | FFmpeg | Audio format conversion |
| Testing | Pytest + Respx | Unit tests with HTTP mocking |
| Linting | Ruff | Fast Python linter & formatter |
| Type Checking | Mypy | Static type analysis |
## 🤝 Contributing

Contributions are welcome! Please read CONTRIBUTING.md for:
- Code quality standards
- Testing requirements
- Architecture guidelines
- Commit message conventions
## 📄 License

This software is proprietary and confidential. Unauthorized copying, distribution, or use is strictly prohibited. To obtain a commercial license or usage rights, please contact the author.