A complete, fully automated, self-hosted AI development stack with Ollama, n8n, React frontend, and Node.js backend.
- 🤖 Self-hosted AI: Local LLM serving with Ollama (Phi-3 Mini, Llama 2 7B)
- 🔄 Automated Workflow: n8n workflow automatically imported and activated
- 💬 Chat Interface: React-based frontend with real-time AI chat
- 🔌 REST API: Node.js backend with PostgreSQL integration
- 📈 Example Integration: Stock price lookup demo
- 🐳 Fully Containerized: Complete Docker setup with one-command deployment
- 🔒 Secure by Default: Pre-configured authentication and credentials
- Docker and Docker Compose installed
- At least 8GB RAM (for AI models)
- 10GB free disk space
- Linux, macOS, or WSL2 (Windows)
This template works seamlessly across all major platforms:
| Platform | Status | Notes |
|---|---|---|
| Linux | ✅ Fully Supported | Ubuntu 20.04+, Debian, Fedora, etc. |
| macOS | ✅ Fully Supported | Intel and Apple Silicon (M1/M2/M3) |
| Windows WSL2 | ✅ Fully Supported | Recommended for Windows users |
| Windows Git Bash | ✅ Supported | Alternative to WSL2 |
Windows Users: The setup script automatically detects your environment (WSL2 or Git Bash) and applies necessary compatibility fixes. No manual configuration needed!
💡 Not sure if you have everything? Run the requirements checker:

```bash
chmod +x check-requirements.sh
./check-requirements.sh
```

This will verify:
- ✅ Docker installation and status
- ✅ Docker Compose availability
- ✅ System RAM (8GB+ recommended)
- ✅ Available disk space (10GB+ needed)
- ✅ Port availability (3000, 3001, 5678, 11434)
- ✅ Required tools (curl)
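
For reference, the port check amounts to probing each port for an existing listener. A minimal Node sketch of the idea (an illustration only, not the actual `check-requirements.sh` logic):

```javascript
// check-ports.mjs - probe the stack's ports (hypothetical helper; run with: node check-ports.mjs)
// A port that accepts a TCP connection already has a service bound to it.
import net from 'net';

for (const port of [3000, 3001, 5678, 11434]) {
  const socket = net.connect({ port, host: '127.0.0.1' });
  socket.on('connect', () => { console.log(`Port ${port}: IN USE`); socket.destroy(); });
  socket.on('error', () => console.log(`Port ${port}: free`));
}
```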
| Command | Result |
|---|---|
| `./quick-start.sh -f` | Full Setup. Automatically downloads both Phi-3 Mini (required) and Llama 2 7B (optional large model). |
| `./quick-start.sh -s` | Standard Setup. Automatically downloads only the required Phi-3 Mini model. |
| `./quick-start.sh` | Interactive. Prompts the user to choose whether to download Llama 2. |
```bash
chmod +x quick-start.sh

# Example recommended command:
./quick-start.sh -s
```

That's it! The script will:
- ✅ Automatically run `check-requirements.sh` to verify your system meets all prerequisites
- ✅ Provide installation instructions and exit safely if any requirements are missing
- ✅ Generate all project files and directories
- ✅ Build and start all Docker containers
- ✅ Wait for services to be healthy
- ✅ Automatically import and activate the n8n workflow
- ✅ Download AI models (Phi-3 Mini required, Llama 2 optional)
- ✅ Verify everything is working

Total setup time: 5-10 minutes (depending on internet speed for model downloads)
Note: the auto-generated credentials (`admin@example.com` / `ChangeMe!1`) are not used for the first login; n8n prompts you to create an Owner account instead.
Recommendation: use your real email address and a unique, strong password for the Owner account when prompted at http://localhost:5678. The AI Chat Interface works immediately, even if the Owner account has not been set up yet.
If you prefer step-by-step control:
```bash
# 1. Generate project structure
chmod +x install.sh
./install.sh

# 2. Start all services
docker-compose up -d --build

# 3. Wait for services (2-3 minutes)
sleep 120

# 4. Setup AI models (interactive)
./scripts/setup-ollama.sh
```

Note: Manual setup requires a manual n8n workflow import. See the Troubleshooting section.
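
If you'd rather poll than sleep a fixed 120 seconds, the wait can be done against the backend health endpoint. A sketch assuming the `/health` route shown in the API testing section below (Node 18+; run with `node wait-healthy.mjs`):

```javascript
// wait-healthy.mjs - poll the backend health endpoint instead of sleeping blindly
async function waitHealthy(url, timeoutMs = 180_000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url);
      if (res.ok) return console.log('Backend is healthy');
    } catch {
      // service not accepting connections yet; fall through and retry
    }
    await new Promise((r) => setTimeout(r, 5000)); // retry every 5 seconds
  }
  throw new Error(`Timed out waiting for ${url}`);
}

await waitHealthy('http://localhost:3001/health');
```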
Once deployment is complete:
| Service | URL | Credentials |
|---|---|---|
| 💬 Chat Interface | http://localhost:3000 | None (open access) |
| ⚙️ n8n Workflows | http://localhost:5678 | Setup required on first login |
| 🔌 Backend API | http://localhost:3001 | None (API endpoints) |
| 🤖 Ollama API | http://localhost:11434 | None (local access) |
- Multi-model Support: Switch between Phi-3 Mini and Llama2 7B
- Real-time Responses: Stream-like experience via n8n webhooks
- Conversation History: All chats stored in PostgreSQL
- Model Performance Indicators: See which model was used for each response
- Mobile Responsive: Works on desktop, tablet, and mobile devices
- Mock API Integration: Example of external data integration
- Historical Data: Track price changes over time
- Database Storage: All queries stored for analysis
- Real-time Updates: Fetch latest prices on demand
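
The demo endpoint can be exercised directly. A minimal Node 18+ sketch (the route matches the API testing section below; the exact response fields depend on the generated backend):

```javascript
// stock.mjs - fetch the mock stock endpoint (run with: node stock.mjs)
const res = await fetch('http://localhost:3001/api/stock/AAPL');
console.log(res.status, await res.json()); // field names depend on backend/server.js
```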
- Pre-configured Workflow: Chat workflow automatically deployed
- Production Webhooks: Activated and ready to receive requests
- Extensible: Easy to add new workflows and integrations
- Visual Editor: Drag-and-drop workflow creation
After running install.sh, the following structure is created:
```
ai-stack-template/
├── 📄 .gitignore             # Git ignore rules (keeps repo clean)
├── 📄 .env                   # Environment variables (auto-generated)
├── 📄 .env.template          # Template for custom configurations
├── 📄 docker-compose.yml     # Service orchestration
├── 📄 README.md              # This file
├── 📄 package.json           # Project metadata & npm scripts
├── 📄 n8n-workflow-chat.json # Exportable n8n workflow
├── 📄 install.sh             # Project structure generator ⚙️
├── 📄 quick-start.sh         # One-command deployment 🚀
├── 📄 manage.sh              # Stack management utility 🛠️
│
├── 📁 backend/               # Node.js Express API (generated)
│   ├── package.json          # Backend dependencies
│   ├── server.js             # Main API server
│   ├── Dockerfile            # Backend container config
│   └── init.sql              # Database schema & seed data
│
├── 📁 frontend/              # React Application (generated)
│   ├── package.json          # Frontend dependencies
│   ├── Dockerfile            # Multi-stage build config
│   ├── nginx.conf            # Production web server config
│   ├── public/index.html     # HTML template
│   └── src/
│       ├── App.js            # Main React component
│       ├── App.css           # Application styles
│       └── index.js          # React entry point
│
├── 📁 n8n/workflows/         # n8n Workflows (generated)
│   └── chat-workflow.json    # Pre-configured chat workflow
│
└── 📁 scripts/               # Utility Scripts (generated)
    └── setup-ollama.sh       # Interactive AI model downloader
```
🔒 Files NOT committed to Git:
- `backend/`, `frontend/`, `n8n/`, `scripts/` - generated by `install.sh`
- `.env` - contains secrets
- Docker volumes, logs, and data
Use the management script for common operations:
```bash
# Service Management
./manage.sh start             # Start all services
./manage.sh stop              # Stop all services
./manage.sh restart           # Restart all services
./manage.sh status            # Check service status

# Monitoring
./manage.sh logs              # View all logs
./manage.sh logs n8n          # View specific service logs
./manage.sh health            # Run health checks

# AI Models
./manage.sh models list       # List installed models
./manage.sh models pull phi3  # Download a specific model
./manage.sh models run phi3   # Test a model

# Data Management
./manage.sh backup            # Create backup
./manage.sh restore           # Restore from backup
./manage.sh clean             # Remove containers (keep data)
./manage.sh reset             # Nuclear option - delete everything

# Access Service Shells
./manage.sh shell ollama      # Access Ollama container
./manage.sh shell n8n         # Access n8n container
./manage.sh shell backend     # Access backend container
./manage.sh shell postgres    # Access PostgreSQL
```

n8n Dashboard:
- Email: configured at first login
- Password: configured at first login
- NOTE: you may skip the optional requests for additional information; a license key is NOT required for community-edition features
PostgreSQL Databases:
- n8n DB: `n8n` / `n8n_password`
- Backend DB: `backend_user` / `backend_password`

All of these defaults can be changed in the `.env` file after running `install.sh`.
The `.env` file (auto-generated) contains:

```bash
# n8n Configuration
N8N_BASIC_AUTH_ACTIVE=true
N8N_USER=admin@example.com
N8N_PASS=ChangeMe!1
N8N_EXTERNAL_API_USERS_ALLOW_BASIC_AUTH=true

# Database URLs
DATABASE_URL=postgresql://backend_user:backend_password@postgres_backend:5432/ai_app

# API Endpoints
N8N_WEBHOOK_URL=http://n8n:5678/webhook
REACT_APP_API_URL=http://localhost:3001

# Add your custom variables here
```

To customize:
1. Copy `.env.template` to `.env` (done automatically)
2. Edit values as needed
3. Restart services: `docker-compose down && docker-compose up -d`
```
User (Browser)
      ↓
Frontend (React:3000)
      ↓
Backend API (Node.js:3001)
      ↓
n8n Webhook (n8n:5678/webhook/chat)
      ↓
Ollama API (Ollama:11434/api/generate)
      ↓
AI Model Response
      ↓
Backend → Frontend → User
```
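
To exercise this whole chain without the UI, you can post straight to the backend chat endpoint. A Node 18+ sketch (the request fields match the curl example in the API testing section; the reply shape comes from the workflow's Respond to Webhook mapping):

```javascript
// chat.mjs - send one message through Backend -> n8n -> Ollama (run with: node chat.mjs)
const res = await fetch('http://localhost:3001/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello!', model: 'phi3:mini' }),
});
console.log(await res.json()); // expect response, model, chatId per the workflow mapping
```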
The pre-configured workflow consists of 3 nodes:

1. Webhook Node
   - Path: `/webhook/chat`
   - Method: POST
   - Response Mode: Using 'Respond to Webhook' Node

2. HTTP Request Node (Ollama)
   - URL: `http://ollama:11434/api/generate`
   - Body: JSON with model, prompt, and stream settings
   - Processes the user message through the AI model

3. Respond to Webhook Node
   - Returns a formatted JSON response
   - Maps: response, model, chatId

🔥 Fully Automated: The workflow is automatically imported, activated, and verified during deployment!
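
For reference, the HTTP Request node's call is equivalent to hitting Ollama's generate API yourself. A Node 18+ sketch from the host (inside the Docker network the workflow uses `http://ollama:11434` instead of `localhost`):

```javascript
// ollama-generate.mjs - the same request the workflow's HTTP Request node sends
const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'phi3:mini', // filled from the webhook payload in the workflow
    prompt: 'Hello',    // the user's message
    stream: false,      // one JSON reply instead of a token stream
  }),
});
const { response } = await res.json(); // Ollama returns the completion in `response`
console.log(response);
```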
```bash
# Method 1: Using manage script
./manage.sh models pull mistral:7b-instruct

# Method 2: Direct Ollama command
docker exec ollama ollama pull codellama:7b-instruct

# Method 3: During setup
# The setup-ollama.sh script offers interactive model selection
```

Available models: https://ollama.com/library
After adding models:
1. Update the model selector in `frontend/src/App.js` (see the sketch below)
2. Rebuild the frontend: `docker-compose up -d --build frontend`
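
`App.js` is generated by `install.sh`, so its internals may differ from this, but conceptually the change is just extending a model list. A hypothetical sketch:

```javascript
// Hypothetical shape of the model selector in frontend/src/App.js
const AVAILABLE_MODELS = [
  { id: 'phi3:mini', label: 'Phi-3 Mini (fast)' },
  { id: 'llama2:7b', label: 'Llama 2 7B' },
  { id: 'mistral:7b-instruct', label: 'Mistral 7B Instruct' }, // newly pulled model
];

function ModelSelector({ value, onChange }) {
  return (
    <select value={value} onChange={(e) => onChange(e.target.value)}>
      {AVAILABLE_MODELS.map((m) => (
        <option key={m.id} value={m.id}>{m.label}</option>
      ))}
    </select>
  );
}
```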
1. Access n8n: http://localhost:5678
2. Log in with admin credentials
3. Create a new workflow or duplicate an existing one
4. Add nodes (400+ integrations available)
5. Set the webhook path (e.g., `/webhook/your-custom-path`)
6. Activate the workflow
7. Update the backend to call the new webhook
Edit `backend/server.js`:

```javascript
// Add new endpoint
app.post('/api/your-endpoint', async (req, res) => {
  const { data } = req.body;

  // Your logic here

  // Call n8n webhook if needed
  const n8nResponse = await axios.post(
    `${process.env.N8N_WEBHOOK_URL}/your-path`,
    { data }
  );

  res.json(n8nResponse.data);
});
```

Rebuild: `docker-compose up -d --build backend`
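
A quick smoke test for the new route once the container is rebuilt (Node 18+; `/api/your-endpoint` is the placeholder path from the snippet above):

```javascript
// test-endpoint.mjs - exercise the example route (run with: node test-endpoint.mjs)
const res = await fetch('http://localhost:3001/api/your-endpoint', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ data: 'ping' }),
});
console.log(res.status, await res.json());
```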
Table: `chat_sessions`

```sql
id            UUID PRIMARY KEY
user_message  TEXT NOT NULL
ai_response   TEXT
model_used    VARCHAR(100)
created_at    TIMESTAMP
updated_at    TIMESTAMP
```

Table: `stock_prices` (Example)

```sql
id            SERIAL PRIMARY KEY
symbol        VARCHAR(10) NOT NULL
price         DECIMAL(10,2)
change_amount DECIMAL(10,2)
recorded_at   TIMESTAMP
```

Access database:

```bash
./manage.sh shell postgres-backend
# Then: SELECT * FROM chat_sessions ORDER BY created_at DESC LIMIT 10;
```
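
You can also query from Node with the `pg` package instead of a shell. A sketch that assumes the backend database's port 5432 is published to the host (check `docker-compose.yml`; credentials are the defaults from `.env`):

```javascript
// query-chats.mjs - read recent chat history (npm install pg; node query-chats.mjs)
import pg from 'pg';

const client = new pg.Client({
  connectionString: 'postgresql://backend_user:backend_password@localhost:5432/ai_app',
});
await client.connect();
const { rows } = await client.query(
  'SELECT user_message, model_used, created_at FROM chat_sessions ORDER BY created_at DESC LIMIT 10'
);
console.table(rows);
await client.end();
```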
Security - current setup:
- ✅ Basic authentication enabled for n8n
- ✅ Local network only (no external access)
- ✅ Credentials in `.env` (gitignored)
- ⚠️ Mock data used (stock API example)
- Change all default passwords
- Enable HTTPS/SSL certificates
- Set up reverse proxy (Nginx, Traefik)
- Implement rate limiting
- Use secrets management (Docker secrets, Vault)
- Enable n8n API key authentication
- Configure firewall rules
- Set up monitoring and alerting
- Regular security updates
- Database backups (automated)
Edit `.env`:

```bash
# Use strong, unique passwords
N8N_PASS=<generate-strong-password>
POSTGRES_PASSWORD=<generate-strong-password>

# Enable encryption
N8N_ENCRYPTION_KEY=<generate-32-char-key>

# Restrict access
CORS_ORIGIN=https://yourdomain.com
```
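
Any cryptographically secure generator works for these values; for example, Node's built-in `crypto` module can produce the 32-character encryption key:

```javascript
// keygen.mjs - print 32 hex characters (16 random bytes) for N8N_ENCRYPTION_KEY
import { randomBytes } from 'crypto';
console.log(randomBytes(16).toString('hex'));
```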
General diagnostics:

```bash
# Check Docker status
docker info
# Check disk space
docker system df
# Check service logs
docker-compose logs
# Check specific service
docker-compose logs backend
```

Symptoms: Chat returns "No response received"
Solutions:

1. Verify the workflow is active:

   ```bash
   # Check n8n logs for webhook registration
   docker-compose logs n8n | grep -i webhook
   ```

2. Manual workflow activation:
   - Go to http://localhost:5678
   - Open "AI Chat Workflow"
   - Toggle the Active switch (top-right) to ON
   - It should turn green/blue

3. Test the webhook directly:

   ```bash
   curl -X POST http://localhost:5678/webhook/chat \
     -H "Content-Type: application/json" \
     -d '{"message": "test", "model": "phi3:mini", "chatId": "test-123"}'
   ```

4. Re-import the workflow:
   - Delete the existing workflow in n8n
   - Import `n8n-workflow-chat.json` from the project root
   - Activate it

If models are missing or Ollama is unresponsive:

```bash
# Check if Ollama is running
docker-compose logs ollama
# List available models
docker exec ollama ollama list
# Re-download models
./scripts/setup-ollama.sh
# Test model directly
docker exec ollama ollama run phi3:mini "Hello"
```

If the frontend can't reach the backend:

```bash
# Check if backend is running
curl http://localhost:3001/health
# Check backend logs
docker-compose logs backend
# Verify network
docker network inspect ai-stack-template_ai_network
# Restart services
docker-compose restart backend frontend
```

If a database is unhealthy:

```bash
# Check PostgreSQL status
docker exec postgres pg_isready -U n8n
# Check backend database
docker exec postgres_backend pg_isready -U backend_user
# Reset databases (⚠️ DELETES DATA)
docker-compose down -v
docker-compose up -d
```

If a port is already taken:

```bash
# Check what's using the port
sudo lsof -i :3000 # or :5678, :3001, :11434
# Stop conflicting service
sudo systemctl stop <service-name>
# Or change port in docker-compose.yml
# Example: "3001:3001" → "3002:3001"
```

To reset and start over:

```bash
# Stop and remove everything
docker-compose down -v
# Remove generated files
rm -rf backend/ frontend/ n8n/ scripts/
# Start fresh
./quick-start.sh
```

Performance tuning:

```bash
# Allocate more RAM to Docker
# Docker Desktop: Settings → Resources → Memory → 8GB+
# Use faster storage
# Move Docker data to SSD if on HDD
# Reduce logging
# Edit docker-compose.yml:
#   logging:
#     driver: "json-file"
#     options:
#       max-size: "10m"
#       max-file: "3"
```

For production scaling:
- Use managed PostgreSQL (AWS RDS, Google Cloud SQL)
- Implement Redis for session storage
- Enable Docker BuildKit for faster builds
- Use horizontal scaling for backend/frontend
- Set up CDN for frontend assets
- Implement caching strategies
Test the backend API:

```bash
# Health check
curl http://localhost:3001/health
# Chat endpoint
curl -X POST http://localhost:3001/api/chat \
-H "Content-Type: application/json" \
-d '{"message": "Hello!", "model": "phi3:mini"}'
# Stock endpoint
curl http://localhost:3001/api/stock/AAPL
```

Test the n8n webhook directly:

```bash
curl -X POST http://localhost:5678/webhook/chat \
-H "Content-Type: application/json" \
-d '{"message": "Test", "model": "phi3:mini", "chatId": "test-123"}'curl -X POST http://localhost:11434/api/generate \
-H "Content-Type: application/json" \
-d '{"model": "phi3:mini", "prompt": "Hello", "stream": false}'| Model | Size | Speed | Quality | Use Case |
|---|---|---|---|---|
| Phi-3 Mini | 2.3GB | Fast β‘ | Good β | General chat, quick responses |
| Llama2 7B | 3.8GB | Medium | Better ββ | Complex queries, detailed answers |
| Mistral 7B | 4.1GB | Medium | Better ββ | Balanced performance |
| CodeLlama 7B | 3.8GB | Medium | Excellent βββ | Code generation |
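
To see which of these are actually installed, Ollama exposes a tags endpoint. A Node 18+ sketch:

```javascript
// list-models.mjs - list installed Ollama models and their sizes
const res = await fetch('http://localhost:11434/api/tags');
const { models } = await res.json();
for (const m of models) {
  console.log(`${m.name}  ${(m.size / 1e9).toFixed(1)} GB`); // size is reported in bytes
}
```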
This is a template project designed for customization. Feel free to:
- Fork and modify for your needs
- Share improvements and extensions
- Report issues or suggest features
- Create custom workflows and integrations
Not actively accepting PRs as this is a personal template, but ideas and feedback are welcome!
This template is provided as-is for development and learning purposes.
Third-party components:
- n8n: Fair-code license
- Ollama: MIT License
- React: MIT License
- Express: MIT License
- PostgreSQL: PostgreSQL License
- Start with the Quick Start guide above
- Explore the frontend code in `frontend/src/App.js`
- Check the backend API in `backend/server.js`
- Play with n8n workflows at http://localhost:5678
- Add new endpoints to the backend
- Create custom n8n workflows
- Integrate external APIs (weather, news, etc.)
- Implement user authentication
- Set up production deployment
- Implement RAG (Retrieval-Augmented Generation)
- Create multi-agent workflows
- Add model fine-tuning capabilities
For issues with this template:
- Check the Troubleshooting section
- Review logs: `docker-compose logs`
- Ensure prerequisites are met
For component-specific issues:
- n8n: https://community.n8n.io/
- Ollama: https://github.com/ollama/ollama/issues
- React: https://react.dev/community
- ✅ Fully automated n8n workflow import and activation
- ✅ One-command deployment via `quick-start.sh`
- ✅ Pre-configured authentication for n8n
- ✅ Health checks and wait logic for reliable startup
- ✅ Interactive model selection during setup
- ✅ Comprehensive management script for operations
- ✅ Production-ready Dockerfiles with proper dependencies
- Docker Hub images for faster deployment
- Kubernetes deployment manifests
- Terraform/Pulumi infrastructure as code
- CI/CD pipeline examples
- Additional pre-built workflows
- RAG implementation example
- User authentication system
🚀 Ready to build amazing AI applications!
Questions? Check the docs above or dive into the code. Happy coding! 🎉