This document provides comprehensive information about the OpenWebUI integration in the AI-Agent-Platform project.
OpenWebUI (formerly Ollama WebUI) is an extensible, feature-rich, and user-friendly self-hosted web interface designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs.
- 🎨 Intuitive Interface: Clean, modern chat interface similar to ChatGPT
- 🔌 Multiple Backend Support: Works with Ollama, OpenAI API, and other compatible APIs
- 📱 Responsive Design: Works seamlessly on desktop, tablet, and mobile devices
- 🔐 User Management: Built-in authentication and multi-user support
- 💬 Chat Features: Multiple conversations, message editing, regeneration
- 📝 Rich Content: Markdown support, code syntax highlighting, LaTeX rendering
- 🌐 Multilingual: Interface available in multiple languages
- 🔒 Privacy-Focused: Can run completely offline
- 📦 Model Management: Download, update, and manage AI models
- 🎯 Customizable: Themes, settings, and configuration options
- Ubuntu/Debian-based VPS or server
- Root or sudo access
- Minimum 2GB RAM recommended
- 10GB+ free disk space
# Clone the repository (if not already done)
git clone https://github.com/wasalstor-web/AI-Agent-Platform.git
cd AI-Agent-Platform
# Run the setup script
./setup-openwebui.sh
The script will:
- Check for Docker and Docker Compose
- Install them if not present
- Set up OpenWebUI container
- Optionally install Ollama
- Configure Nginx reverse proxy (optional)
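The dependency check can be pictured with a small sketch. This is illustrative only, not the setup script's actual code: it detects whether Docker is on the PATH, which is the decision point before the script would run the official installer.

```shell
# Illustrative sketch of the setup script's dependency check
# (hypothetical; the real script's logic is not shown here).
if command -v docker >/dev/null 2>&1; then
  docker_status=present
else
  docker_status=missing   # here the script would run the Docker installer
fi
echo "docker: $docker_status"
```

The same pattern applies to Docker Compose: probe first, install only when absent.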
For non-interactive installation (useful for CI/CD or scripts):
./setup-openwebui.sh install
Configure OpenWebUI by setting environment variables in the .env file:
# Copy example configuration
cp .env.example .env
# Edit the configuration
nano .env
Available configuration options:
| Variable | Default | Description |
|---|---|---|
| OPENWEBUI_PORT | 3000 | Port where OpenWebUI will be accessible |
| OPENWEBUI_HOST | 0.0.0.0 | Host interface to bind to |
| OPENWEBUI_VERSION | latest | Docker image version to use |
| OLLAMA_API_BASE_URL | http://localhost:11434 | Ollama API endpoint |
| WEBUI_SECRET_KEY | (generated) | Secret key for sessions (generate with `openssl rand -hex 32`) |
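Putting these together, a filled-in .env might look like the following. The values are illustrative; in particular, replace the secret key with real output from `openssl rand -hex 32`:

```shell
# Example .env (illustrative values)
OPENWEBUI_PORT=3000
OPENWEBUI_HOST=0.0.0.0
OPENWEBUI_VERSION=latest
OLLAMA_API_BASE_URL=http://localhost:11434
WEBUI_SECRET_KEY=replace-with-output-of-openssl-rand-hex-32
```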
The setup script creates a docker-compose.yml file in /opt/openwebui/:
version: '3.8'

services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: openwebui
    restart: unless-stopped
    ports:
      - "3000:8080"
    volumes:
      - openwebui_data:/app/backend/data
    environment:
      - OLLAMA_API_BASE_URL=http://localhost:11434
      - WEBUI_SECRET_KEY=your-secret-key
      - WEBUI_AUTH=true
    networks:
      - openwebui_network

# Top-level declarations for the named volume and network referenced above
volumes:
  openwebui_data:

networks:
  openwebui_network:

After installation, access OpenWebUI at:
Local Access:
http://localhost:3000
Remote Access:
http://your-vps-ip:3000
Domain Access (with Nginx):
http://ai.yourdomain.com
https://ai.yourdomain.com # with SSL
1. Create Admin Account
   - Navigate to the OpenWebUI URL
   - Click "Sign Up"
   - Enter email and password
   - First user becomes admin automatically
2. Configure Models
   - Go to Settings → Models
   - If using Ollama, models will appear automatically
   - For OpenAI API, add your API key
3. Start Chatting
   - Select a model from the dropdown
   - Type your message
   - Press Enter or click Send
The setup-openwebui.sh script provides a management interface:
./setup-openwebui.sh
Menu options:
1. Install OpenWebUI - Full installation process
2. Show Status - Display current status
3. Show Logs - View real-time logs
4. Restart - Restart the service
5. Stop Service - Stop OpenWebUI
6. Configure Nginx - Set up reverse proxy
7. Install Ollama - Install Ollama locally
8. Exit
Direct commands for common operations:
# Check status
./setup-openwebui.sh status
# View logs
./setup-openwebui.sh logs
# Restart service
./setup-openwebui.sh restart
# Stop service
./setup-openwebui.sh stop
For advanced users who prefer direct Docker commands:
# Navigate to OpenWebUI directory
cd /opt/openwebui
# View status
docker compose ps
# View logs
docker compose logs -f
# Restart
docker compose restart
# Stop
docker compose down
# Start
docker compose up -d
# Update to latest version
docker compose pull
docker compose up -d
OpenWebUI is integrated into the Smart Deploy menu:
./smart-deploy.sh
Select option 10) "إدارة OpenWebUI" (OpenWebUI Management) to access OpenWebUI management.
- Access via domain name instead of IP:PORT
- SSL/HTTPS support
- Better security with proper headers
- Professional appearance
1. Automatic Setup (Recommended)
   ./setup-openwebui.sh
   # Select option 6: Configure Nginx
2. Manual Setup
Create Nginx configuration:
sudo nano /etc/nginx/sites-available/openwebui
Add configuration:
server {
    listen 80;
    server_name ai.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
Enable and reload:
sudo ln -s /etc/nginx/sites-available/openwebui /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Setup SSL with Certbot:
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d ai.yourdomain.com

Ollama is a lightweight, easy-to-use tool for running large language models locally. It supports models such as Llama 2, Mistral, CodeLlama, and many others.
Option 1: During OpenWebUI Setup
- The setup script will ask if you want to install Ollama
- Select "Yes" when prompted
Option 2: Manual Installation
curl -fsSL https://ollama.ai/install.sh | sh
Option 3: Via Setup Script
./setup-openwebui.sh
# Select option 7: Install Ollama
After installing Ollama, download models:
# List installed models
ollama list
# Pull a model
ollama pull llama2
ollama pull mistral
ollama pull codellama
# Run a model (test)
ollama run llama2
# Remove a model
ollama rm llama2
| Model | Size | Use Case |
|---|---|---|
| llama2 | 3.8GB | General purpose, good balance |
| llama2:13b | 7.4GB | Better quality, needs more RAM |
| mistral | 4.1GB | Fast and capable, great for chat |
| codellama | 3.8GB | Code generation and understanding |
| phi | 1.6GB | Small, fast, efficient |
| neural-chat | 4.1GB | Conversational AI |
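As a rough rule of thumb (consistent with the RAM figures in the FAQ below, e.g. Llama2 at 3.8GB wanting about 8GB), a model needs roughly twice its download size in free RAM once loaded. A hedged sanity check against the current host:

```shell
# Heuristic only: a model wants ~2x its download size in RAM once loaded.
model_gb=4   # e.g. mistral, ~4.1GB on disk
total_ram_gb=$(free -g 2>/dev/null | awk '/^Mem:/{print $2}')
: "${total_ram_gb:=0}"   # fall back to 0 if `free` is unavailable
if [ "$total_ram_gb" -ge $((model_gb * 2)) ]; then
  echo "RAM looks sufficient for this model"
else
  echo "RAM looks tight - consider a smaller model such as phi"
fi
```

Treat the 2x factor as a loose guide, not an exact requirement; quantization and context length both change the real footprint.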
Ollama runs an API server on port 11434:
# Check if Ollama is running
curl http://localhost:11434/api/tags
# Generate text
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt": "Hello, how are you?"
}'
Check Docker status:
sudo systemctl status docker
sudo systemctl start docker
Check container logs:
cd /opt/openwebui
docker compose logs
Verify port is available:
sudo netstat -tlnp | grep 3000
Check firewall:
sudo ufw status
sudo ufw allow 3000/tcp
Verify container is running:
docker ps | grep openwebuiCheck Ollama is running:
sudo systemctl status ollama
sudo systemctl start ollama
Test Ollama API:
curl http://localhost:11434/api/tags
Update OpenWebUI environment:
cd /opt/openwebui
nano docker-compose.yml
# Update OLLAMA_API_BASE_URL if needed
docker compose restart
Check system resources:
free -h # Memory
df -h # Disk space
htop    # CPU and processes
Reduce resource usage:
- Use smaller models (phi, mistral instead of llama2:13b)
- Limit concurrent users
- Increase server resources
Reset OpenWebUI data:
cd /opt/openwebui
docker compose down -v # WARNING: Deletes all data
docker compose up -d
Backup data:
docker cp openwebui:/app/backend/data ./backup-data
Restore data:
docker cp ./backup-data openwebui:/app/backend/data
docker compose restart
The platform includes automated checks for OpenWebUI services:
# Full VPS check including OpenWebUI
./deploy.sh --host your-vps.com
The check will verify:
- ✓ Port 3000 (OpenWebUI) is accessible
- ✓ Port 11434 (Ollama) is accessible
- ✓ OpenWebUI web interface is responding
- ✓ Ollama API is responding
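The kind of port probe these checks perform can be sketched in plain shell. This is illustrative, not the deploy script's actual logic; it relies on bash's /dev/tcp pseudo-device, a bashism that needs no extra tools:

```shell
# Illustrative TCP port probe (bash's /dev/tcp; not the deploy script's code).
check_port() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}
check_port 127.0.0.1 3000    # OpenWebUI
check_port 127.0.0.1 11434   # Ollama
```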
- Use a strong password
- Don't share admin credentials
- Create separate user accounts for each person
- Always set up SSL with Certbot
- Redirect HTTP to HTTPS
- Use strong cipher suites
# Allow only necessary ports
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp # SSH
sudo ufw allow 80/tcp # HTTP
sudo ufw allow 443/tcp # HTTPS
sudo ufw enable

# Update system
sudo apt update && sudo apt upgrade
# Update Docker images
cd /opt/openwebui
docker compose pull
docker compose up -d
# Update Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Backup OpenWebUI data
docker cp openwebui:/app/backend/data ./backup-$(date +%Y%m%d)
# Backup docker-compose configuration
cp /opt/openwebui/docker-compose.yml ./backup-compose-$(date +%Y%m%d).yml

# Check for suspicious activity
cd /opt/openwebui
docker compose logs --tail=100
# Setup log rotation
sudo nano /etc/docker/daemon.json
Add:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
You can add custom models or external APIs:
1. OpenAI API
   - Go to Settings → Connections
   - Add OpenAI API key
   - Select GPT models
2. Custom Ollama Models
   - Create a Modelfile
   - Import it into Ollama
   - Available in OpenWebUI
3. Remote Ollama Instance
   - Update OLLAMA_API_BASE_URL in docker-compose.yml to point to the remote server
   - Ensure network connectivity
For high-traffic scenarios:
1. Multiple Instances
   - Use a load balancer (nginx) to distribute traffic
   - Run multiple OpenWebUI containers on different ports
2. External Database
   - Move to PostgreSQL or MySQL
   - Update the connection in docker-compose.yml
3. Resource Limits
   Add to docker-compose.yml:
   deploy:
     resources:
       limits:
         cpus: '2'
         memory: 4G
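For clarity, the resource limits nest under the service definition in docker-compose.yml. A sketch of where the block sits (values illustrative; tune them to your host):

```yaml
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:latest
    # ... other keys unchanged ...
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G
```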
Webhook Integration:
- Configure webhooks in OpenWebUI settings
- Integrate with Slack, Discord, Teams
API Access:
- OpenWebUI provides a REST API
- Documentation is available at /api/docs
Daily:
- Monitor logs for errors
- Check service status
Weekly:
- Review user activity
- Check disk space
- Backup important data
Monthly:
- Update Docker images
- Update system packages
- Review and rotate logs
- Security audit
Create a monitoring script:
#!/bin/bash
# /opt/scripts/monitor-openwebui.sh

cd /opt/openwebui || exit 1

# Check if the container is running
if ! docker compose ps | grep -q "Up"; then
    echo "OpenWebUI is down! Restarting..."
    docker compose up -d
    # Send alert (email, Slack, etc.)
fi

# Check disk space
DISK_USAGE=$(df -h /opt | tail -1 | awk '{print $5}' | sed 's/%//')
if [ "$DISK_USAGE" -gt 80 ]; then
    echo "Warning: Disk usage is ${DISK_USAGE}%"
    # Send alert
fi

Add to crontab:
crontab -e
# Add: */5 * * * * /opt/scripts/monitor-openwebui.sh- OpenWebUI GitHub: https://github.com/open-webui/open-webui
- OpenWebUI Documentation: https://docs.openwebui.com
- Ollama Website: https://ollama.ai
- Ollama GitHub: https://github.com/ollama/ollama
- Ollama Models: https://ollama.ai/library
- Discord: Join OpenWebUI community
- GitHub Issues: Report bugs and feature requests
- Discussions: Ask questions and share tips
- LLM Basics: Understanding large language models
- Prompt Engineering: Crafting effective prompts
- Model Comparison: Choosing the right model
- Fine-tuning: Customizing models for specific tasks
Q: How much RAM do I need? A: Minimum 2GB for the interface, 4GB+ recommended. For models: Phi (2GB), Llama2 (8GB), Llama2:13b (16GB).
Q: Can I use OpenAI models? A: Yes! Add your OpenAI API key in settings to use GPT models.
Q: Is my data secure? A: When running locally with Ollama, everything stays on your server. No data leaves your infrastructure.
Q: Can multiple users use it simultaneously? A: Yes, OpenWebUI supports multiple concurrent users with separate accounts.
Q: How do I update OpenWebUI?
A: Run cd /opt/openwebui && docker compose pull && docker compose up -d
Q: Can I use it without internet? A: Yes, once Ollama and models are installed, it works completely offline.
Q: What's the difference between OpenWebUI and ChatGPT? A: OpenWebUI is self-hosted, private, and can use various models including local ones. ChatGPT is cloud-based.
Q: Can I customize the interface? A: Yes, OpenWebUI supports themes and various customization options in settings.
For issues related to:
AI-Agent-Platform Integration:
- Open an issue: https://github.com/wasalstor-web/AI-Agent-Platform/issues
- Check documentation: README.md
OpenWebUI Itself:
- OpenWebUI GitHub: https://github.com/open-webui/open-webui/issues
- OpenWebUI Docs: https://docs.openwebui.com
Ollama:
- Ollama GitHub: https://github.com/ollama/ollama/issues
- Ollama Docs: https://ollama.ai/docs
- Initial OpenWebUI integration
- Created setup script with bilingual support
- Added to smart-deploy menu
- Integrated with VPS connection checks
- Complete documentation
This integration guide is part of the AI-Agent-Platform project.
OpenWebUI is licensed under the MIT License. Ollama is licensed under the MIT License.
AI-Agent-Platform © 2025