A 100% offline meeting prep assistant that turns your console history into team/manager-ready updates.
InnerBoard-local analyzes your terminal sessions and produces concise, professional talking points for your next standup or 1:1. Everything runs locally on your machine; no data ever leaves your device.
- Record Console Sessions: Capture interactive terminal activity with timing
- Extract Insights (SRE): Turn raw logs into structured successes, blockers, and resources
- Compose Meeting Prep (MAC): Generate team/manager updates and concrete recommendations
- 100% Local: Uses local LLMs (Ollama); no data leaves your device
- Modern CLI: Beautiful terminal interface with progress indicators and rich formatting
```
┌───────────────────┐  process   ┌────────────────────┐  generate  ┌───────────────────┐
│   Terminal Logs   │ ─────────▶ │    SRE Sessions    │ ─────────▶ │    MAC Meeting    │
│   (raw console)   │            │    (structured     │            │    Prep Output    │
└───────────────────┘            │     insights)      │            └───────────────────┘
          │                      └────────────────────┘                      │
          │                                 │                                │
          ▼                                 ▼                                ▼
┌───────────────────┐            ┌────────────────────┐            ┌───────────────────┐
│  Encrypted Vault  │◀───────────│    Local Ollama    │───────────▶│   Team Updates    │
│  (SQLite+Fernet)  │            │  Model Processing  │            │  Recommendations  │
└───────────────────┘            │ (localhost:11434)  │            │  Manager Reports  │
                                 └────────────────────┘            └───────────────────┘
```
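The "Local Ollama Model Processing" stage in this pipeline amounts to POSTing prompts to Ollama's HTTP API on the loopback interface. A minimal sketch (illustrative only; InnerBoard's actual client adds connection pooling, caching, and error handling):

```python
import json
import urllib.request


def build_request(prompt: str, model: str = "gpt-oss:20b") -> dict:
    """Assemble a non-streaming generation request for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}


def ollama_generate(prompt: str, model: str = "gpt-oss:20b") -> str:
    """Send the request to the local Ollama server; nothing leaves the machine."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # loopback only
        data=json.dumps(build_request(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires `ollama serve` running with the model already pulled):
#   reply = ollama_generate("Summarize: fixed auth token validation.")
```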
- PBKDF2 Key Derivation: 100,000 iterations for cryptographically secure key generation
- Fernet AES-128 Encryption: Industry-standard encryption for all stored data
- Password-Protected Keys: Optional password encryption of master keys with integrity validation
- Input Validation: Comprehensive protection against SQL injection, XSS, path traversal attacks
- Data Integrity: SHA256 checksums ensure data hasn't been tampered with
- Network Isolation: Only allows loopback connections to Ollama (port 11434)
- Secure File Deletion: Overwrites sensitive files before deletion
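The key-derivation and encryption chain described above can be sketched with the `cryptography` package (a minimal illustration; InnerBoard's actual salt handling, key storage, and parameters may differ):

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a Fernet-compatible key from a password via PBKDF2 (100,000 iterations)."""
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(),
        length=32,
        salt=salt,
        iterations=100_000,
    )
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))


salt = os.urandom(16)                      # stored alongside the vault
key = derive_key("correct horse battery", salt)
f = Fernet(key)
token = f.encrypt(b"private reflection")   # encrypted at rest
assert f.decrypt(token) == b"private reflection"
```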
- SRE (Structured Reflection Extraction): Advanced AI extracts structured insights
- Key successes identification with specifics and context
- Blocker identification with impact assessment and resolution hints
- Resource needs assessment and recommendations
- Session summaries with actionable details
- MAC (Meeting Advice Composer): Generates professional meeting prep content
- Team updates with progress highlights and next focus areas
- Manager updates with outcomes, risks, and resource needs
- Concrete recommendations with actionable next steps
- Multi-session synthesis for comprehensive reporting
- Rich Terminal Interface: Beautiful tables, progress bars, and color-coded output
- Comprehensive Help: Detailed command help and usage guidance
- Error Handling: User-friendly messages with actionable solutions
- Progress Indicators: Real-time feedback during AI processing
- Multi-format Display: Tables, panels, structured data, and formatted text
- Intelligent Caching: TTL-based caching (responses, models, reflections)
- Connection Pooling: Efficient Ollama client management (max 5 connections)
- SQLite Optimization: WAL mode, foreign keys, indexed queries
- GPU Acceleration: Leverages platform capabilities via Ollama
- Memory Management: Automatic cache cleanup and size limits
- Thread-Safe Operations: All caching and connections are thread-safe
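As an illustration of the TTL-caching and thread-safety points above, here is a minimal sketch (class and method names are hypothetical, not InnerBoard's internal API):

```python
import threading
import time


class TTLCache:
    """Thread-safe cache whose entries expire after ttl seconds."""

    def __init__(self, ttl: float = 300.0, max_size: int = 128):
        self._ttl = ttl
        self._max_size = max_size
        self._data: dict = {}          # key -> (value, expiry_timestamp)
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            entry = self._data.get(key)
            if entry is None:
                return None
            value, expires = entry
            if time.monotonic() > expires:
                del self._data[key]    # lazy eviction of stale entries
                return None
            return value

    def put(self, key, value):
        with self._lock:
            if len(self._data) >= self._max_size:
                # Evict the entry closest to expiry to respect the size limit.
                oldest = min(self._data, key=lambda k: self._data[k][1])
                del self._data[oldest]
            self._data[key] = (value, time.monotonic() + self._ttl)


cache = TTLCache(ttl=0.05)
cache.put("model", "gpt-oss:20b")
print(cache.get("model"))   # "gpt-oss:20b" while fresh
time.sleep(0.1)
print(cache.get("model"))   # None after the TTL elapses
```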
- Environment Variables: Full configuration via env vars and .env files
- Auto-loading: .env files loaded automatically with python-dotenv
- Dynamic Model Switching: Easy switching between Ollama models
- Validation: Configuration validation with helpful error messages
- Runtime Updates: Configuration changes take effect immediately
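For example, configuration via environment variables and a `.env` file can be wired up like this (the variable names `INNERBOARD_MODEL` and the defaults shown are illustrative; check `env.example` for the exact keys InnerBoard uses):

```python
import os

# python-dotenv loads .env files automatically; fall back gracefully if absent.
try:
    from dotenv import load_dotenv  # pip install python-dotenv
    load_dotenv()  # reads .env from the working directory, if present
except ImportError:
    pass  # plain environment variables still work

# Use sensible defaults when a variable is unset.
ollama_host = os.getenv("OLLAMA_HOST", "http://localhost:11434")
model_name = os.getenv("INNERBOARD_MODEL", "gpt-oss:20b")
print(f"Using {model_name} via {ollama_host}")
```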
Choose the installation method that works best for you:
Run one command per OS to install natively with a virtual environment and local Ollama:
macOS / Linux:

```bash
curl -fsSL "https://raw.githubusercontent.com/ramper-labs/InnerBoard-local/main/quickstart.sh" | bash
```

Windows (PowerShell):

```powershell
iwr "https://raw.githubusercontent.com/ramper-labs/InnerBoard-local/main/quickstart.ps1" -UseBasicParsing | iex
```

The script will:
- ✅ Create/activate a Python virtual environment
- ✅ Install InnerBoard-local
- ✅ Ensure Ollama is installed and running locally
- ✅ Pull the default AI model (`gpt-oss:20b`)
- ✅ Initialize your encrypted vault
- ✅ Run health checks
During initialization you'll be prompted to set a password for your encrypted vault.
Optionally, the quickstart can save your password to .env (plaintext) for convenience; you can decline if you prefer not to store it.
That's it! You're ready to start using InnerBoard-local.
Prerequisites:
- Python 3.8+
- Git
- Ollama
```bash
# 1. Clone the repository
git clone https://github.com/ramper-labs/InnerBoard-local.git
cd InnerBoard-local

# 2. Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# 3. Install InnerBoard
pip install -e .

# 4. Run automated setup
innerboard setup
```

Docker installation:

```bash
# 1. Clone the repository
git clone https://github.com/ramper-labs/InnerBoard-local.git
cd InnerBoard-local

# 2. Run Docker setup script
./docker-setup.sh
```

Or manually:

```bash
# Build and start services
docker-compose up -d

# Initialize InnerBoard (run once)
docker-compose exec innerboard innerboard init
```

After installation, verify everything is working:
```bash
# If using a virtual environment (created automatically on some systems):
source innerboard_venv/bin/activate

# Check system health
innerboard health

# View available commands
innerboard --help
```

Note: If the quickstart script created a virtual environment (`innerboard_venv`), you'll need to activate it each time you want to use InnerBoard-local:

```bash
# Activate virtual environment
source innerboard_venv/bin/activate

# Use InnerBoard normally
innerboard add "My reflection"
innerboard list

# Deactivate when done
deactivate
```

Optional: Add this alias to your `~/.bashrc` for convenience:

```bash
echo 'alias innerboard="source innerboard_venv/bin/activate && innerboard"' >> ~/.bashrc
source ~/.bashrc
```

System Requirements:
- RAM: 8GB minimum, 16GB recommended
- Storage: 20GB free space for AI models
- OS: Linux, macOS, or Windows (WSL)
Use the built-in recorder to capture an interactive shell session. By default, writes are flushed frequently for near-real-time updates; you can disable this with `--no-flush`.

```bash
# Start recording (type `exit` to finish)
innerboard record

# Useful options
#   --dir PATH          Save session under a custom directory
#   --name NAME         Filename (auto-generated if omitted)
#   --shell PATH        Shell to launch (defaults to $SHELL or /bin/bash)
#   --flush/--no-flush  Control write flushing (default: --flush)
```

Session artifacts (raw log, timing, segments, SRE JSON) are saved under your app data directory, e.g. `~/.local/share/InnerBoard/sessions/` on Linux/WSL.
Aggregate all recorded sessions and produce crisp talking points.
```bash
# Generate concise meeting prep
innerboard prep

# Include detailed SRE insights (verbose)
innerboard prep --show-sre
```

One-time: create your encrypted vault for notes (password prompt):

```bash
innerboard init

# Optional: avoid prompts by exporting your password
export INNERBOARD_KEY_PASSWORD="your_secure_password_here"
```

Save a short private note alongside your sessions to remind your future self:

```bash
innerboard add "Investigated auth token validation; planning staging tests next."
```

A typical workflow:
- Record meaningful work sessions: `innerboard record`
- Optionally jot private notes to remind yourself: `innerboard add "Short note"`
- Generate prep before standup/1:1: `innerboard prep --show-sre`

Before the meeting:
- Run `innerboard prep` for concise talking points
- Use `--show-sre` to include detailed context when needed
- Skim SRE session summaries to spot patterns
- Capture follow-ups as private notes with `innerboard add "..."`
| Command | Description | Example |
|---|---|---|
| `curl -fsSL https://raw.githubusercontent.com/ramper-labs/InnerBoard-local/main/quickstart.sh \| bash` | One-command installation | Quick start script |
| `innerboard setup` | Interactive setup wizard | `innerboard setup --docker` |
| `innerboard health` | Comprehensive health check | `innerboard health --detailed` |
| `./docker-setup.sh` | Docker deployment setup | Automated Docker setup |
| Command | Description | Example |
|---|---|---|
| `innerboard init` | Initialize encrypted vault | `innerboard init` |
| `innerboard add "text"` | Add private reflection | `innerboard add "Struggling with auth tokens"` |
| `innerboard list` | View saved reflections | `innerboard list --limit 10` |
| `innerboard delete <id>` | Delete specific reflection | `innerboard delete 5 --force` |
| `innerboard del <id>` | Alias for `delete` | `innerboard del 5 --force` |
| `innerboard clear` | Delete ALL reflections | `innerboard clear --force` |
| `innerboard status` | Vault status and stats | `innerboard status` |
| Command | Description | Example |
|---|---|---|
| `innerboard record` | Record terminal session | `innerboard record --name standup` |
| `innerboard prep` | Generate meeting prep | `innerboard prep --show-sre` |
| `innerboard models` | List available AI models | `innerboard models` |
| Command | Description | Example |
|---|---|---|
| `innerboard --help` | Show all commands | `innerboard add --help` |
- Your data stays local: No information leaves your device
- Encryption: All reflections are encrypted at rest
- Network isolation: Only local Ollama connections allowed during processing
- Password protection: Your master key is password-protected
- Secure deletion: Sensitive files are overwritten before deletion
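The secure-deletion behavior described above can be illustrated with a small sketch (a simplified model; the real implementation may use multiple passes or different syncing):

```python
import os


def secure_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes before unlinking it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as fh:
        for _ in range(passes):
            fh.seek(0)
            fh.write(os.urandom(size))   # clobber the old contents
            fh.flush()
            os.fsync(fh.fileno())        # push the overwrite to disk
    os.remove(path)


# Demo on a throwaway file.
with open("scratch.key", "wb") as fh:
    fh.write(b"super secret")
secure_delete("scratch.key")
print(os.path.exists("scratch.key"))  # False
```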
```bash
# Create your config file
cp env.example .env
```

```bash
# See what models are available
innerboard models

# Expected output:
#    Available
#  Ollama Models
# ┏━━━━━━━━━━━━━┓
# ┃ Model Name  ┃
# ┡━━━━━━━━━━━━━┩
# │ gpt-oss:20b │
# └─────────────┘
#
# Current model: gpt-oss:20b
```

"externally-managed-environment" error:
```bash
# The quickstart script handles this automatically by creating a virtual environment.
# If you need to do it manually:
python3 -m venv innerboard_venv
source innerboard_venv/bin/activate
pip install -e .
```

Quick start script fails:

```bash
# Try manual installation
git clone https://github.com/ramper-labs/InnerBoard-local.git
cd InnerBoard-local
innerboard setup
```

Docker setup fails:

```bash
# Check Docker status
docker --version
docker-compose --version

# Clean and retry
docker system prune -a
./docker-setup.sh
```

Run a detailed health check:
```bash
innerboard health --detailed
```

Common health check issues:

- ❌ Python Environment: upgrade to Python 3.8+
- ❌ Ollama Service: run `ollama serve`
- ❌ AI Model: run `ollama pull gpt-oss:20b`
- ❌ Vault System: run `innerboard init`
```bash
# Solution: pull the model
ollama pull gpt-oss:20b

# Check if Ollama is running
ollama list
```

```bash
# Set environment variable
export INNERBOARD_KEY_PASSWORD="your_password"

# Or use the --password flag
innerboard status --password your_password
```

```bash
# Some commands don't support the --password flag.
# Use the environment variable instead:
INNERBOARD_KEY_PASSWORD="your_password" innerboard list
```

```bash
# Ensure write permissions in the current directory
chmod 755 .
```

```bash
# Restart the Ollama service
# macOS:
brew services restart ollama

# Linux:
sudo systemctl restart ollama

# Or start manually:
ollama serve
```

```bash
# General help
innerboard --help

# Command-specific help
innerboard add --help
innerboard list --help
```

Backup your vault:

```bash
# Your encrypted data is in:
# - vault.db  (encrypted reflections)
# - vault.key (encrypted master key)
cp vault.db vault.db.backup
cp vault.key vault.key.backup
```

| Component | Tests | Status |
|---|---|---|
| Security | 16 tests | ✅ All passing |
| Caching | 17 tests | ✅ All passing |
| Integration | 11 tests | ✅ All passing |
| Network Safety | 6 tests | ✅ All passing |
| Total | 50 tests | ✅ All passing |
- ✅ First-run setup and recording flow
- ✅ AI analysis with real Ollama models
- ✅ Security validation (blocks SQL injection, XSS)
- ✅ Configuration file loading
- ✅ Error handling and recovery
- ✅ Data persistence and integrity
- ✅ Performance optimization
- ✅ Multi-reflection management
Licensed under the Apache License 2.0. See LICENSE for details.
- Powered by Ollama for local model serving
- Inspired by the need for private, offline meeting preparation
- Built with Click for CLI, Rich for UI, Cryptography for security
```console
~/project $ kubectl get pods -n production
NAME                           READY   STATUS    RESTARTS   AGE
auth-service-7d4b8f9c6-x2p9q   1/1     Running   0          2d

~/project $ kubectl describe pod auth-service-7d4b8f9c6-x2p9q -n production
# ... detailed pod information ...

~/project $ git log --oneline -5
a1b2c3d Fix authentication token validation
e4f5g6h Update user service endpoints
# ... more git history ...
```

SRE output:

```json
[
  {
    "summary": "Investigated authentication service in production, reviewed recent commits for token validation fixes.",
    "key_successes": [
      {
        "desc": "Successfully accessed production Kubernetes cluster",
        "specifics": "kubectl get pods -n production",
        "adjacent_context": "Authentication service pod running stable for 2 days"
      },
      {
        "desc": "Identified recent authentication fixes in git history",
        "specifics": "git log --oneline -5",
        "adjacent_context": "Found commit a1b2c3d fixing token validation"
      }
    ],
    "blockers": [],
    "resources": [
      "kubectl describe pod auth-service-7d4b8f9c6-x2p9q -n production",
      "Git commit a1b2c3d - authentication token validation fix"
    ]
  }
]
```

MAC output:

```json
{
  "team_update": [
    "✅ Successfully accessed production K8s cluster and verified auth service stability",
    "🔍 Reviewed recent authentication fixes, found token validation improvements",
    "🎯 Next focus: testing token validation changes in staging environment"
  ],
  "manager_update": [
    "Team has good access to production systems and can debug effectively",
    "Recent authentication fixes appear stable with 2+ days uptime",
    "No current blockers, investigation skills developing well"
  ],
  "recommendations": [
    "Test the authentication token validation fix in staging environment",
    "Document the debugging process for future authentication issues",
    "Schedule follow-up review of authentication service architecture"
  ]
}
```

Your offline meeting prep companion that stays on your device. ✨