# SynaPlan

AI-powered knowledge management with chat, document processing, and RAG (Retrieval-Augmented Generation).
## Prerequisites

- Docker & Docker Compose
- Git
## Quick Start

```bash
git clone <repository-url>
cd synaplan-dev

# Quick start (models download on-demand)
docker compose up -d

# Or: pre-download AI models during startup
AUTO_DOWNLOAD_MODELS=true docker compose up -d
```

What happens automatically:
- ✅ Creates `.env` from `.env.example` (Docker Compose variables)
- ✅ Creates `backend/.env` and `frontend/.env` (app-specific configs)
- ✅ Installs dependencies (Composer, npm)
- ✅ Generates JWT keypair for authentication
- ✅ Creates database schema (migrations)
- ✅ Loads test users and fixtures (if database is empty)
- ✅ Starts all services
- ✅ System ready in ~40 seconds!
First startup takes ~40 seconds because:
- Database initialization: ~5s
- Schema creation: ~2s
- Fixtures loading: ~3s
- Cache warming: ~2s
- Total: ~40s (one-time setup)
Subsequent restarts take ~15 seconds (no fixtures needed).
### AI Model Download Behavior

By default, AI models are NOT downloaded automatically. They download on-demand when first used.

**Option 1: Quick Start (Recommended for Development)**

```bash
docker compose up -d
```

- ⚡ Fast startup: ~40 seconds (first run), ~15s (subsequent)
- 📥 Models: download automatically when you first send a chat message (~2-3 minutes)
- 💡 Best for: development, testing, quick demos
- 🎯 System is immediately usable for login, file uploads, user management
**Option 2: Pre-download Models**

```bash
AUTO_DOWNLOAD_MODELS=true docker compose up -d
```

- 🚀 Backend ready: still ~40 seconds
- 📦 Models download in background: `mistral:7b` (4.1GB) + `bge-m3` (670MB)
- ⏱️ Total download time: ~5-10 minutes (depends on internet speed)
- ✅ AI chat ready immediately after models finish downloading
- 💡 Best for: production, demos where AI must work immediately
Check download progress:

```bash
docker compose logs -f backend | grep -i "model\|background"
```

When to use which option:

- Development/Testing: use the default (on-demand download)
- Production/Demos: use `AUTO_DOWNLOAD_MODELS=true`
- CI/CD: build a custom image with pre-downloaded models
## Services

| Service | URL | Description |
|---|---|---|
| Frontend | http://localhost:5173 | Vue.js Web App |
| Backend API | http://localhost:8000 | Symfony REST API |
| phpMyAdmin | http://localhost:8082 | Database Management |
| MailHog | http://localhost:8025 | Email Testing |
| Ollama | http://localhost:11435 | AI Models API |
## Test Users

| Email | Password | Level |
|---|---|---|
| admin@synaplan.com | admin123 | BUSINESS |
| demo@synaplan.com | demo123 | PRO |
| test@example.com | test123 | NEW |
## RAG Pipeline

The system includes a full RAG (Retrieval-Augmented Generation) pipeline:
- Upload: Multi-level processing (Extract Only, Extract + Vectorize, Full Analysis)
- Extraction: Tika (documents), Tesseract OCR (images), Whisper (audio)
- Vectorization: bge-m3 embeddings (1024 dimensions) via Ollama
- Storage: Native MariaDB VECTOR type with VEC_DISTANCE_COSINE similarity search
- Search: Semantic search UI with configurable thresholds and group filtering
- Sharing: Private by default, public sharing with optional expiry
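As an illustration of what `VEC_DISTANCE_COSINE` computes under the hood, here is a minimal Python sketch of threshold-filtered semantic search. Toy 3-dimensional vectors stand in for the real 1024-dimensional `bge-m3` embeddings; the actual search runs inside MariaDB.

```python
import math

def cosine_distance(a, b):
    """Cosine distance, 1 - cos(a, b), as in MariaDB's VEC_DISTANCE_COSINE.
    Lower means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def semantic_search(query_vec, docs, threshold=0.5):
    """Return (doc_id, distance) pairs under the configured threshold,
    closest first -- mirroring the search UI's threshold filter."""
    hits = [(doc_id, cosine_distance(query_vec, vec)) for doc_id, vec in docs]
    return sorted((h for h in hits if h[1] <= threshold), key=lambda h: h[1])

# Toy corpus: two "documents" with pre-computed embeddings
docs = [("a.pdf", [1.0, 0.0, 0.0]), ("b.pdf", [0.0, 1.0, 0.0])]
print(semantic_search([0.9, 0.1, 0.0], docs))  # only a.pdf passes the threshold
```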
## Audio Transcription

Audio files are automatically transcribed using Whisper.cpp when uploaded:
- Supported formats: mp3, wav, ogg, m4a, opus, flac, webm, aac, wma
- Automatic conversion: FFmpeg converts all audio to an optimal format (16kHz mono WAV)
- Models: tiny, base (default), small, medium, large (configurable via `.env`)
- Setup:
  - Docker: pre-installed; models download on first run
  - Local: install whisper.cpp and FFmpeg, configure paths in `.env`
Environment variables (see `.env.example`):

```bash
WHISPER_BINARY=/usr/local/bin/whisper          # Whisper.cpp binary path
WHISPER_MODELS_PATH=/var/www/html/var/whisper  # Model storage
WHISPER_DEFAULT_MODEL=base                     # tiny|base|small|medium|large
WHISPER_ENABLED=true                           # Enable/disable transcription
FFMPEG_BINARY=/usr/bin/ffmpeg                  # FFmpeg for audio conversion
```

If Whisper is unavailable, audio processing is skipped gracefully (no errors).
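The graceful-skip behavior can be sketched as follows. This is a hypothetical Python wrapper, not the backend's actual PHP service; the whisper.cpp flags shown are illustrative and should be checked against your build.

```python
import os
import shutil
import subprocess

def transcribe(audio_path,
               binary=os.environ.get("WHISPER_BINARY", "/usr/local/bin/whisper"),
               model=os.environ.get("WHISPER_DEFAULT_MODEL", "base")):
    """Transcribe an audio file with whisper.cpp; return None (skip quietly)
    when the binary is missing or the run fails, mirroring the docs above."""
    if shutil.which(binary) is None:
        return None  # Whisper unavailable: skip gracefully, no error raised
    try:
        # Flags are illustrative; verify against your whisper.cpp version
        out = subprocess.run([binary, "-m", model, "-f", audio_path],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()
    except (subprocess.CalledProcessError, OSError):
        return None  # transcription failed: also skipped gracefully

print(transcribe("sample.wav", binary="/nonexistent/whisper"))  # -> None
```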
## WhatsApp Integration

SynaPlan integrates with Meta's official WhatsApp Business API for bidirectional messaging.
1. Create WhatsApp Business Account: Meta Business Suite
2. Get Credentials: Access Token, Phone Number ID, Business Account ID
3. Set Environment Variables:

   ```bash
   WHATSAPP_ACCESS_TOKEN=your_access_token
   WHATSAPP_PHONE_NUMBER_ID=your_phone_number_id
   WHATSAPP_BUSINESS_ACCOUNT_ID=your_business_account_id
   WHATSAPP_WEBHOOK_VERIFY_TOKEN=your_verify_token
   WHATSAPP_ENABLED=true
   ```

4. Configure Webhook in Meta:
   - Callback URL: `https://your-domain.com/api/v1/webhooks/whatsapp`
   - Verify Token: same as `WHATSAPP_WEBHOOK_VERIFY_TOKEN`
   - Subscribe to: `messages`
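For reference, Meta verifies the callback URL with a GET handshake before delivering any events: the endpoint must echo back `hub.challenge` when `hub.verify_token` matches. A minimal, framework-agnostic sketch (the function name is hypothetical):

```python
def verify_whatsapp_webhook(params, expected_token):
    """Meta's webhook verification handshake: on a GET with
    hub.mode=subscribe and a matching hub.verify_token, the endpoint
    must respond 200 with the raw hub.challenge value; otherwise 403."""
    if (params.get("hub.mode") == "subscribe"
            and params.get("hub.verify_token") == expected_token):
        return 200, params.get("hub.challenge", "")
    return 403, "Forbidden"

# Simulated handshake using WHATSAPP_WEBHOOK_VERIFY_TOKEN=your_verify_token
status, body = verify_whatsapp_webhook(
    {"hub.mode": "subscribe", "hub.verify_token": "your_verify_token",
     "hub.challenge": "12345"},
    expected_token="your_verify_token")
print(status, body)  # -> 200 12345
```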
### Phone Verification

Users must verify their phone number via WhatsApp to unlock full features:
- ANONYMOUS (not verified): 10 messages, 2 images (very limited)
- NEW (verified): 50 messages, 5 images, 2 videos
- PRO/TEAM/BUSINESS: Full subscription limits
Verification Flow:

1. User enters phone number in the web interface
2. A 6-digit code is sent via WhatsApp
3. User confirms the code
4. Phone is linked to the account → full access
5. User can remove the link anytime
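Steps 2-3 of the flow above can be sketched as follows (an illustrative snippet, not the actual backend code):

```python
import secrets

def generate_code():
    """A 6-digit verification code like the one sent via WhatsApp
    (hypothetical sketch; zero-padded, cryptographically random)."""
    return f"{secrets.randbelow(1_000_000):06d}"

def confirm(submitted, issued):
    """Compare the user's submitted code against the issued one in
    constant time to avoid timing side channels."""
    return secrets.compare_digest(submitted, issued)

code = generate_code()
print(len(code), confirm(code, code))  # -> 6 True
```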
- ✅ Text Messages (send & receive)
- ✅ Media Messages (images, audio, video, documents)
- ✅ Audio Transcription (via Whisper.cpp)
- ✅ Phone Verification System
- ✅ Full AI Pipeline (PreProcessor → Classifier → Handler)
- ✅ Rate Limiting per subscription level
- ✅ Message status tracking
```
WhatsApp User → Meta Webhook → /api/v1/webhooks/whatsapp
  → Message Entity → PreProcessor (files, audio transcription)
  → Classifier (sorting, tool detection) → InferenceRouter
  → AI Handler (Chat/RAG/Tools) → Response → WhatsApp
```
## Email Integration

SynaPlan supports email-based AI conversations with smart chat context management.
- General: `smart@synaplan.com` creates a general chat conversation
- Keyword-based: `smart+keyword@synaplan.com` creates a dedicated chat context
  - Example: `smart+project@synaplan.com` for project discussions
  - Example: `smart+support@synaplan.com` for support tickets
- ✅ Automatic User Detection: registered users get their own rate limits
- ✅ Anonymous Email Support: unknown senders get ANONYMOUS limits
- ✅ Chat Context: email threads become chat conversations
- ✅ Spam Protection:
  - Max 10 emails/hour per unknown address
  - Automatic blacklisting for spammers
- ✅ Email Threading: replies stay in the same chat context
- ✅ Unified Rate Limits: same limits across Email, WhatsApp, Web
```
User sends email to smart@synaplan.com
  → System checks if the sender is a registered user
  → If yes: use the user's rate limits
  → If no: create an anonymous user with ANONYMOUS limits
  → Parse keyword from recipient (smart+keyword@)
  → Find or create chat context
  → Process through AI pipeline
  → Send response via email (TODO: requires SMTP)
```
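The keyword-parsing step above (`smart+keyword@`) can be sketched as (illustrative only):

```python
def parse_chat_context(recipient):
    """Extract the chat-context keyword from a plus-addressed recipient:
    smart+project@synaplan.com -> 'project'; plain smart@ -> None (general chat)."""
    local = recipient.split("@", 1)[0]      # local part before the @
    if "+" in local:
        return local.split("+", 1)[1] or None
    return None

print(parse_chat_context("smart+project@synaplan.com"))  # -> project
print(parse_chat_context("smart@synaplan.com"))          # -> None
```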
- Registered User Email = User's subscription limits
- Unknown Email = ANONYMOUS limits (10 messages total)
- Spam Detection: Auto-blacklist after 10 emails/hour
## Webhooks & API Keys

The API also supports other external channels via webhooks authenticated with API keys:
1. Create API Key: `POST /api/v1/apikeys` (requires JWT login)

   ```json
   { "name": "Email Integration", "scopes": ["webhooks:*"] }
   ```

   Returns `sk_abc123...` (store it securely; it is shown only once!)

2. Use Webhooks: send messages via API key authentication
   - Header: `X-API-Key: sk_abc123...`, or
   - Query: `?api_key=sk_abc123...`
- Email: `POST /api/v1/webhooks/email`
- WhatsApp: `POST /api/v1/webhooks/whatsapp`
- Generic: `POST /api/v1/webhooks/generic`
Example (Email):

```bash
curl -X POST https://your-domain.com/api/v1/webhooks/email \
  -H "X-API-Key: sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "from": "user@example.com",
    "subject": "Question",
    "body": "Hello, how can I help?"
  }'
```

Response: an AI-generated reply based on the message content.

API key management:

- `GET /api/v1/apikeys` - List keys
- `POST /api/v1/apikeys` - Create key
- `PATCH /api/v1/apikeys/{id}` - Update (activate/deactivate)
- `DELETE /api/v1/apikeys/{id}` - Revoke key
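The curl example above translates to Python's standard library like this (a sketch that only builds the request; nothing is sent until you pass it to `urllib.request.urlopen`):

```python
import json
import urllib.request

def build_email_webhook_request(base_url, api_key, payload):
    """Build (but do not send) a POST to the email webhook with
    X-API-Key authentication, matching the curl example."""
    return urllib.request.Request(
        base_url.rstrip("/") + "/api/v1/webhooks/email",
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_email_webhook_request(
    "https://your-domain.com", "sk_your_key",
    {"from": "user@example.com", "subject": "Question",
     "body": "Hello, how can I help?"})
# To actually send it: urllib.request.urlopen(req)
```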
## Project Structure

```
synaplan-dev/
├── _devextras/          # Development extras
├── _docker/             # Docker configurations
│   ├── backend/         # Backend Dockerfile & scripts
│   └── frontend/        # Frontend Dockerfile & nginx
├── backend/             # Symfony Backend (PHP 8.3)
├── frontend/            # Vue.js Frontend
└── docker-compose.yml   # Main orchestration
```
## Configuration

Environment files are auto-generated on first start:

- `backend/.env.local` (auto-created by the backend container, only if it does not exist)
- `frontend/.env.docker` (auto-created by the frontend container)

Note: `.env.local` is never overwritten. To reset it, delete the file and restart the container.

Example files provided:

- `backend/.env.docker.example` (reference)
- `frontend/.env.docker.example` (reference)
## Common Commands

```bash
# View logs
docker compose logs -f

# Restart services
docker compose restart backend
docker compose restart frontend

# Reset database (deletes all data!)
docker compose down -v
docker compose up -d

# Run migrations
docker compose exec backend php bin/console doctrine:migrations:migrate

# Install packages
docker compose exec backend composer require <package>
docker compose exec frontend npm install <package>
```

## AI Models

Models are downloaded on-demand when first used:

- `mistral:7b` - main chat model (4.1 GB), downloaded on first chat
- `bge-m3` - embedding model for RAG (2.2 GB), downloaded when using document search

To download models during startup (in the background):

```bash
AUTO_DOWNLOAD_MODELS=true docker compose up -d
```

The backend starts immediately while models download in parallel. Monitor progress:

```bash
docker compose logs -f backend
```

You'll see messages like:

```
[Background] ⏳ Model 'mistral:7b' download in progress...
[Background] ✅ Model 'mistral:7b' downloaded successfully!
```
## Features

- ✅ AI Chat: multiple providers (Ollama, OpenAI, Anthropic, Groq, Gemini)
- ✅ RAG System: semantic search with MariaDB VECTOR + bge-m3 embeddings (1024 dim)
- ✅ Document Processing: PDF, Word, Excel, images (Tika + OCR)
- ✅ Audio Transcription: Whisper.cpp integration
- ✅ File Management: upload, share (public/private), organize with expiry
- ✅ App Modes: Easy mode (simplified) and Advanced mode (full features)
- ✅ Security: private files by default, secure sharing with tokens
- ✅ Multi-user: role-based access with JWT authentication
- ✅ Responsive UI: Vue.js 3 + TypeScript + Tailwind CSS
## License

See LICENSE.