Version: 2.0 | Status: Production Ready ✅ | License: MIT
A sophisticated, local-first AI cognitive partner designed to enhance human thinking through Socratic questioning, pattern recognition, and serendipitous insights. Built with enterprise-grade performance optimization, comprehensive testing, and adaptive intelligence.
Synapse transcends traditional AI assistants by functioning as a true cognitive partner that:
- 🎯 Facilitates Self-Discovery: Uses advanced Socratic questioning to guide you toward your own insights
- 🔗 Discovers Hidden Connections: Reveals unexpected relationships in your thoughts and conversations
- 📊 Visualizes Thinking Patterns: Provides interactive charts showing your cognitive landscape
- 🏠 Operates Completely Locally: Zero cloud dependency - your thoughts stay private
- ⚡ Adapts to Your Hardware: Automatically optimizes for everything from low-end to high-performance systems
- 🧪 Production-Tested: Comprehensive test suite with 95%+ coverage across all components
- Dynamic Socratic Method: AI adapts questioning style based on your cognitive patterns
- Intellectual Rigor: Challenges assumptions while maintaining supportive dialogue
- Contextual Memory System: Sophisticated conversation history with pattern recognition
- Adaptive Learning: Continuously improves questioning based on your responses
- Pattern Discovery: Identifies unexpected connections across your conversations
- Insight Generation: Surfaces hidden relationships in your thinking
- Connection Visualization: Interactive charts showing thought relationships
- Thematic Analysis: Discovers recurring themes and cognitive patterns
- Automatic System Detection: Real-time hardware capability assessment
- Adaptive Timeouts: Dynamic response timing based on system performance
- Intelligent Caching: Multi-layer caching for instant responses
- Progressive Loading: Meaningful feedback during AI processing
- Stress-Tested: Handles concurrent users and large conversation datasets
- 100% Local Processing: No data ever leaves your machine
- Zero Telemetry: No analytics, tracking, or data collection
- Secure File Handling: Restricted file access with validation
- Input Sanitization: Comprehensive security against malicious inputs
- Offline Capable: Full functionality without internet connection
- Glassmorphism Design: Beautiful, translucent HUD-style interface
- Real-time Streaming: Live response generation with typewriter effects
- Responsive Design: Optimized for desktop, tablet, and mobile devices
- Accessibility Compliant: WCAG 2.1 AA standards for inclusive design
- Cross-Browser Support: Works seamlessly across all modern browsers
- Operating System: Linux, macOS, or Windows
- Python: 3.8+ (Python 3.9+ recommended) OR Docker
- RAM: 4GB minimum (8GB+ for optimal performance)
- Storage: 2GB free space for models and caching
- GPU: Optional - NVIDIA GPU for accelerated inference
Perfect for: Quick deployment, isolated environments, consistent setup across systems
-
Prerequisites: Install Docker and Docker Compose
-
Clone the repository:
git clone https://github.com/NexionisJake/Synapse.git cd Synapse -
Configure environment:
cp .env.example .env # Edit .env and set OLLAMA_HOST=http://host.docker.internal:11434 -
Start with Docker Compose:
docker-compose up --build -d
-
Access the application at
http://localhost:5000
Note: Ensure Ollama is running on your host machine. For detailed Docker setup, see DEPLOYMENT.md.
-
Clone and navigate:
git clone https://github.com/NexionisJake/Synapse.git cd Synapse -
Run system optimization (detects your hardware and configures optimal settings):
python detect_system_performance.py
-
Start Synapse using the automated startup script:
# Linux/macOS ./start_synapse.sh # Windows start_synapse.bat
-
Access the application at
http://localhost:5000
For custom setups or development:
-
Create virtual environment:
python3 -m venv venv source venv/bin/activate # Linux/macOS # or venv\Scripts\activate # Windows
-
Install dependencies:
pip install -r requirements.txt
-
Install and start Ollama:
# Install Ollama (see https://ollama.ai) ollama serve ollama pull llama3:8b # or your preferred model
-
Configure environment:
cp .env.example .env # Edit .env with your preferences -
Start the application:
python app.py
Synapse automatically detects and optimizes for your system configuration:
| Performance Tier | Hardware Specs | Response Times | Experience Quality |
|---|---|---|---|
| High Performance | 8+ cores, 16+ GB RAM, SSD | 15-45 seconds | Real-time conversation flow |
| Standard | 4-8 cores, 8-16 GB RAM | 30-90 seconds | Smooth interaction |
| Efficient | 2-4 cores, 4-8 GB RAM | 60-180 seconds | Patient, guided experience |
| Conservative | ≤2 cores, ≤4 GB RAM | 120-300 seconds | Thoughtful, deliberate pace |
- Dynamic Timeout Adjustment: Automatically extends processing time for complex requests
- Resource Monitoring: Real-time system performance tracking
- Graceful Degradation: Maintains functionality across all hardware configurations
- Cache Optimization: Intelligent caching reduces repeated processing time by 80%+
For fine-tuning performance, see our comprehensive guides:
- 📖 PERFORMANCE_SETUP.md - Detailed optimization guide
- 🚀 DEPLOYMENT.md - Production deployment options
- 🔧 SERENDIPITY_DEPLOYMENT.md - Advanced feature configuration
Synapse works best when you share your authentic thoughts and uncertainties:
✅ Effective Approaches:
- "I'm considering a career change but feel torn between security and passion..."
- "I have this business idea, but something doesn't feel quite right about it..."
- "I keep putting off this important decision, and I'm not sure why..."
- "I notice I react strongly when people mention X, but I can't articulate why..."
❌ Less Effective:
- "What should I do about my job?" (too direct)
- "Give me advice on relationships" (seeking answers rather than exploration)
- "What's the best way to..." (looking for solutions rather than understanding)
The Synapse Experience:
- Thoughtful Questions: Instead of direct answers, expect questions that illuminate different angles
- Pattern Recognition: Synapse identifies recurring themes and cognitive patterns
- Gentle Challenges: Respectful probing of assumptions and blind spots
- Serendipitous Insights: Unexpected connections between seemingly unrelated topics
- Visual Understanding: Interactive charts showing your thinking landscape
You: "I want to start a business but don't know what kind."
Synapse: "What draws you to entrepreneurship specifically? Is it the independence, the creative control, solving a particular problem you've noticed, or something else entirely?"
You: "I think it's the freedom to create something meaningful..."
Synapse: "When you say 'meaningful' - can you think of a time recently when you felt that sense of meaning? What were you doing, and what made it feel significant?"
- Memory Continuity: Synapse remembers your conversations and builds understanding over time
- Serendipity Engine: Click "Discover Connections" to reveal hidden patterns in your thinking
- Cognitive Charts: Visual representations of your thought patterns and themes
- Export Insights: Save important discoveries and patterns for future reflection
Synapse is built with a sophisticated, modular architecture designed for scalability and maintainability:
synapse/
├── 🎯 Frontend Layer
│ ├── static/css/style.css # Glassmorphism UI components
│ ├── static/js/chat.js # Real-time chat interface
│ ├── static/js/streaming-performance-monitor.js # Performance tracking
│ └── static/js/cognitive-charts.js # Interactive visualization
│
├── 🧠 Backend Services
│ ├── app.py # Flask application & routing
│ ├── ai_service.py # AI communication layer
│ ├── memory_service.py # Conversation persistence
│ ├── prompt_service.py # Dynamic prompt management
│ ├── serendipity_service.py # Pattern recognition engine
│ └── performance_optimizer.py # System optimization
│
├── ⚙️ Configuration & Utilities
│ ├── config.py # Environment-aware configuration
│ ├── detect_system_performance.py # Hardware detection
│ ├── error_handler.py # Comprehensive error handling
│ └── security.py # Input validation & security
│
├── 🧪 Quality Assurance
│ ├── test_*.py # Comprehensive test suite (95%+ coverage)
│ ├── run_comprehensive_test_suite.py # Automated testing
│ └── performance_benchmarks.py # Performance validation
│
└── 📚 Documentation
├── README.md # This comprehensive guide
├── SETUP.md # Installation instructions
├── DEPLOYMENT.md # Production deployment
├── SECURITY.md # Security implementation
└── PERFORMANCE_SETUP.md # Optimization guide
Backend:
- Python 3.8+ with Flask framework
- Ollama Integration for local AI processing
- Asyncio for concurrent request handling
- Pydantic for data validation
- Psutil for system monitoring
Frontend:
- Modern JavaScript (ES6+) with async/await
- CSS3 with Glassmorphism effects
- Chart.js for cognitive visualization
- Responsive Design principles
AI & Machine Learning:
- Local LLaMA Models (3B to 70B parameter support)
- Ollama for model management and inference
- Custom Prompt Engineering for Socratic questioning
- Pattern Recognition algorithms for serendipity detection
Development & Testing:
- Pytest framework with 95%+ test coverage
- Unit, Integration, and End-to-End testing
- Performance Benchmarking with automated validation
- Security Testing including input validation and OWASP compliance
Synapse supports extensive configuration through environment variables:
| Variable | Default | Description |
|---|---|---|
FLASK_ENV |
development |
Environment mode (development/production/testing) |
OLLAMA_MODEL |
llama3:8b |
AI model to use for conversations |
SECRET_KEY |
Auto-generated | Flask session security key |
LOG_LEVEL |
INFO |
Logging verbosity (DEBUG/INFO/WARNING/ERROR) |
| Variable | Default | Description |
|---|---|---|
STREAMING_TIMEOUT |
180 |
Maximum time for streaming responses (seconds) |
RESPONSE_TIMEOUT |
60 |
Standard response timeout (seconds) |
OLLAMA_TIMEOUT |
30 |
AI service connection timeout (seconds) |
MAX_CONVERSATION_LENGTH |
100 |
Maximum conversation history entries |
| Variable | Default | Description |
|---|---|---|
ENABLE_SERENDIPITY_ENGINE |
True |
Enable pattern discovery features |
SERENDIPITY_MIN_INSIGHTS |
3 |
Minimum insights required for analysis |
SERENDIPITY_MAX_MEMORY_SIZE_MB |
10 |
Maximum memory size for analysis |
SERENDIPITY_ANALYSIS_TIMEOUT |
120 |
Analysis processing timeout |
| Variable | Default | Description |
|---|---|---|
SANITIZE_ERRORS |
False |
Enable error message sanitization in production |
ENABLE_PERFORMANCE_MONITORING |
True |
System performance tracking |
MAX_FILE_SIZE_MB |
5 |
Maximum upload file size |
Development Mode:
export FLASK_ENV=development
export LOG_LEVEL=DEBUG
export MAX_CONVERSATION_LENGTH=50Production Mode:
export FLASK_ENV=production
export SECRET_KEY=your-secure-secret-key
export LOG_LEVEL=INFO
export SANITIZE_ERRORS=True
export MAX_CONVERSATION_LENGTH=200Testing Mode:
export FLASK_ENV=testing
export MEMORY_FILE=test_memory.json
export STREAMING_TIMEOUT=30Synapse maintains enterprise-grade quality through comprehensive testing:
- Unit Tests: 95%+ coverage of individual components
- Integration Tests: 85%+ coverage of component interactions
- End-to-End Tests: 80%+ coverage of complete user workflows
- Performance Tests: Stress testing with various data sizes
- Security Tests: Input validation and vulnerability assessment
- Browser Compatibility: Automated testing across major browsers
# Run all tests
python run_comprehensive_test_suite.py
# Quick test suite
python run_tests_simple.py
# Specific test categories
pytest test_ai_service.py
pytest test_memory_service.py
pytest test_serendipity_*.py# System performance validation
python performance_benchmarks.py
# Stress testing with concurrent users
pytest test_*_performance.py -v- Chrome/Chromium ✅ Full support
- Firefox 70+ ✅ Full support
- Safari 13+ ✅ Full support
- Edge 80+ ✅ Full support
- Mobile Browsers ✅ Responsive design tested
- WCAG 2.1 AA standards compliance
- Screen reader compatibility
- Keyboard navigation support
- High contrast mode support
Test Results Summary:
- ✅ Integration Tests: All 47 tests passing
- ✅ Performance Tests: Meeting all benchmark thresholds
- ✅ Security Tests: Zero vulnerabilities detected
- ✅ Browser Tests: 100% compatibility across target browsers
- ✅ Accessibility Tests: WCAG 2.1 AA compliant
Performance Benchmarks:
- Small datasets (≤10 insights): < 2 seconds ✅
- Medium datasets (≤100 insights): < 10 seconds ✅
- Large datasets (≤500 insights): < 30 seconds ✅
- Concurrent requests (5 users): < 60 seconds ✅
- Zero Data Collection: No telemetry, analytics, or user tracking
- Local Processing Only: All AI inference happens on your machine
- No Network Dependencies: Fully functional offline after initial setup
- Conversation Privacy: All conversations stored locally with your control
- Input Validation: Comprehensive sanitization of all user inputs
- File Access Controls: Restricted file system access to project directory only
- Error Handling: Production-safe error messages (sensitive info sanitized)
- Session Security: Secure session management with configurable keys
- OWASP Compliance: Following OWASP security guidelines
All security measures are validated through:
- Automated Security Scans: Regular vulnerability assessments
- Input Validation Tests: Comprehensive testing of edge cases
- File Access Tests: Verification of directory restrictions
- Error Message Sanitization: Ensuring no sensitive data leakage
Slow Responses or Timeouts:
# 1. Check system performance
python detect_system_performance.py
# 2. Optimize configuration
python config.py # Validates current settings
# 3. Monitor real-time performance
curl http://localhost:5000/api/performance/statusHigh Memory Usage:
# Clear conversation history
curl -X POST http://localhost:5000/api/memory/cleanup
# Check memory statistics
curl http://localhost:5000/api/memory/statsOllama Connection Problems:
# Verify Ollama is running
ollama list
# Check model availability
ollama pull llama3:8b
# Restart Ollama service
ollama serveApplication Won't Start:
# Check dependencies
pip install -r requirements.txt
# Verify Python version
python --version # Should be 3.8+
# Check for port conflicts
lsof -i :5000 # Linux/macOS
netstat -an | findstr :5000 # WindowsReset Application State:
# Backup current data
cp memory.json memory.json.backup
# Clear error logs
> synapse_errors.log
# Reset to defaults
python detect_system_performance.py --resetComplete Reinstallation:
# Remove virtual environment
rm -rf venv/
# Reinstall from scratch
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txtFor additional support, see:
- 📖 SETUP.md - Detailed installation guide
- 🚀 DEPLOYMENT.md - Production deployment
- ⚡ PERFORMANCE_SETUP.md - Optimization guide
Synapse has passed comprehensive testing and is ready for production deployment:
1. Local Production (Recommended)
# Set production environment
export FLASK_ENV=production
export SECRET_KEY=$(python -c 'import secrets; print(secrets.token_hex(32))')
# Use production WSGI server
pip install gunicorn
gunicorn -w 4 -b 0.0.0.0:5000 app:app2. Docker Deployment
# Quick start with Docker Compose
docker-compose up --build -d
# View logs
docker-compose logs -f
# Stop containers
docker-compose downFor detailed Docker setup including Ollama integration, health checks, and volume management, see DEPLOYMENT.md.
3. Reverse Proxy Setup (nginx)
server {
listen 80;
server_name your-domain.com;
location / {
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}Health Checks:
# Application status
curl http://localhost:5000/api/status
# Performance metrics
curl http://localhost:5000/api/performance/status
# Memory usage
curl http://localhost:5000/api/memory/statsLog Monitoring:
synapse_errors.log- Application errors and warningsollama.log- AI service logs- System performance metrics via built-in dashboard
We welcome contributions from the community! Whether you're fixing bugs, adding features, improving documentation, or sharing ideas, your input is valuable.
Option 1: Docker Development (Recommended)
# Clone repository
git clone https://github.com/NexionisJake/Synapse.git
cd Synapse
# Build and run with Docker
docker-compose up --build
# Run tests in container
docker-compose exec synapse pytestOption 2: Local Development
# Clone repository
git clone https://github.com/NexionisJake/Synapse.git
cd Synapse
# Create virtual environment
python -m venv venv
source venv/bin/activate # Linux/macOS
# or venv\Scripts\activate # Windows
# Install dependencies
pip install -r requirements.txt
pip install pytest pytest-cov pytest-mock coverage
# Run tests
python run_comprehensive_test_suite.py- Fork the repository to your GitHub account
- Create a feature branch with a descriptive name:
git checkout -b feature/amazing-feature # or git checkout -b fix/bug-description # or git checkout -b docs/documentation-improvement
- Make your changes following our code standards (see below)
- Test thoroughly:
- Maintain 95%+ test coverage for new code
- Run the full test suite:
python run_comprehensive_test_suite.py - Test Docker builds if modifying Docker files:
docker-compose build
- Document your changes:
- Update relevant
.mdfiles (README, SETUP, DEPLOYMENT, etc.) - Add inline code comments for complex logic
- Update docstrings for new functions/classes
- Update relevant
- Commit with clear, descriptive messages:
git commit -m "Add feature: Brief description of what you added" git commit -m "Fix: Brief description of what you fixed" git commit -m "Docs: Brief description of documentation changes"
- Push to your fork:
git push origin feature/amazing-feature
- Submit a Pull Request with:
- Clear title describing the change
- Detailed description of what changed and why
- References to related issues (if applicable)
- Screenshots/examples for UI changes
- Bug Fixes: Found a bug? We'd love a fix!
- Features: New cognitive capabilities, UI improvements, performance optimizations
- Documentation: Tutorials, examples, improved explanations
- Testing: Additional test cases, improved test coverage
- Docker & DevOps: Container optimization, CI/CD pipelines, deployment guides
- Accessibility: Improving WCAG compliance, screen reader support
- Internationalization: Translations, locale support
- Performance: Optimization, caching improvements, resource efficiency
- Security: Vulnerability fixes, security enhancements
Python Code:
- Follow PEP 8 style guidelines
- Use type hints where applicable
- Write docstrings for all public functions/classes
- Maximum line length: 100 characters
- Use meaningful variable and function names
JavaScript Code:
- Use ES6+ features (async/await, arrow functions, etc.)
- Consistent indentation (2 spaces)
- Use
constandlet(avoidvar) - Add JSDoc comments for complex functions
Testing:
- Maintain 95%+ test coverage for new code
- Write unit tests for individual functions
- Write integration tests for component interactions
- Include edge cases and error handling tests
Documentation:
- Update relevant
.mdfiles with your changes - Use clear, concise language
- Include code examples where helpful
- Add screenshots for UI changes
Docker:
- Keep images lean (use multi-stage builds if needed)
- Document all environment variables in
.env.example - Test builds on multiple platforms (Linux, macOS, Windows)
- Update
DEPLOYMENT.mdwith Docker-related changes
Security:
- All user inputs must be validated and sanitized
- No hardcoded secrets or credentials
- Follow OWASP security guidelines
- Report security issues privately (see SECURITY.md)
Found a bug? Please open an issue with:
- Clear, descriptive title
- Steps to reproduce the bug
- Expected vs. actual behavior
- Your environment (OS, Python version, Docker version, etc.)
- Screenshots or error messages (if applicable)
- Any relevant logs from
synapse_errors.log
Have an idea? We'd love to hear it! Open an issue with:
- Clear description of the feature
- Use case: What problem does it solve?
- Potential implementation approach (if you have ideas)
- Any related features or examples from other projects
All contributors will be:
- Listed in our contributors section
- Credited in release notes for significant contributions
- Appreciated for making Synapse better for everyone!
Not sure about something? Have questions before contributing?
- Open a discussion on GitHub
- Check existing issues and pull requests
- Review our documentation files
We're here to help make your contribution experience smooth and rewarding!
- Total Lines of Code: ~15,000+ (Python, JavaScript, CSS, HTML)
- Test Coverage: 95%+ across all components
- Documentation: Comprehensive guides (8 detailed .md files)
- Performance: Supports systems from 2GB to 32GB+ RAM
- Browser Support: 100% compatibility with modern browsers
- ✅ Core Cognitive Partnership: Advanced Socratic questioning
- ✅ Serendipity Engine: Pattern recognition and insight discovery
- ✅ Performance Optimization: Automatic system adaptation
- ✅ Security Implementation: Comprehensive input validation
- ✅ Modern UI: Glassmorphism design with accessibility
- ✅ Production Ready: Enterprise-grade testing and deployment
This project is proprietary software. All rights reserved to the author.
Technology Stack:
- Ollama - Local AI model management and inference
- Flask - Lightweight web framework
- Chart.js - Beautiful data visualization
- LLaMA Models - Advanced language models
Methodological Inspiration:
- Socratic Method - Ancient Greek philosophical questioning technique
- Cognitive Behavioral Therapy - Pattern recognition in thinking
- Design Thinking - Human-centered problem-solving approach
Privacy Philosophy:
- Local-First Software movement - User data sovereignty
- Digital Minimalism - Intentional technology use
- Calm Technology - Non-intrusive, helpful computing