- Docker Desktop - Make sure it's installed and running
- API Keys - You'll need:
  - Hugging Face API key (free): https://huggingface.co/settings/tokens
  - GitHub Personal Access Token: https://github.com/settings/tokens (needs `repo` scope)
  - PagerDuty API key: From your PagerDuty account settings
- Clone/navigate to the project:

  ```bash
  cd OpsLens
  ```

- Run the setup script:

  ```bash
  ./setup.sh
  ```

  Or manually:
  ```bash
  # Copy secrets template
  cp secrets.env.example secrets.env

  # Edit secrets.env and add your API keys
  nano secrets.env  # or use your preferred editor

  # Start services
  docker-compose up -d

  # Initialize database
  docker-compose exec backend python -m app.db.init_db

  # Generate synthetic data (optional but recommended for demo)
  docker-compose exec backend python -m app.data.generate_synthetic
  ```
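Before starting the services, it can help to confirm that `secrets.env` is complete. Below is a minimal sketch of such a check; the key names are assumptions for illustration only, so match them to whatever `secrets.env.example` actually lists:

```python
import pathlib

# Assumed key names -- adjust to the ones in secrets.env.example.
REQUIRED_KEYS = {"HUGGINGFACE_API_KEY", "GITHUB_TOKEN", "PAGERDUTY_API_KEY"}

def missing_secrets(path: str) -> set:
    """Return required keys that are absent or empty in a KEY=value env file."""
    found = set()
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        # Skip blanks, comments, and lines without an assignment.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if value.strip():
            found.add(key.strip())
    return REQUIRED_KEYS - found
```

Running `missing_secrets("secrets.env")` before `docker-compose up -d` catches a forgotten key early, instead of via a confusing runtime error inside a container.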
- Access the application:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/docs
- Navigate to http://localhost:3000
- You'll see a list of incidents (if you generated synthetic data)
- Click on any incident to view details
- The incident page has 4 tabs:
- Timeline: Chronological events related to the incident
- Hypotheses: AI-generated root cause hypotheses
- Evidence: Logs, metrics, screenshots, etc.
- Actions: Actionable next steps
- Click "Generate Timeline" to fetch data from GitHub and PagerDuty
- Click "Generate Hypotheses" to create root cause hypotheses based on evidence
- Go to the Evidence tab
- Upload a dashboard screenshot
- The VLM will analyze it and extract insights
- Check Docker Desktop is running
- Check ports 3000, 8000, 5432, 6379 are not in use
- Check `docker-compose logs` for errors
- Make sure the Postgres container is healthy: `docker-compose ps`
- Reinitialize the database: `docker-compose exec backend python -m app.db.init_db`
- Verify your API keys in `secrets.env`
- For Hugging Face, make sure the token has read access
- For GitHub, ensure the token has the `repo` scope
- Check the backend is running: `curl http://localhost:8000/health`
- Check the frontend logs: `docker-compose logs frontend`
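The backend health check above can also be scripted, which is handy when waiting for the containers to come up. A minimal sketch, assuming the `/health` endpoint mentioned above (the polling helper itself is not part of OpsLens):

```python
import time
import urllib.error
import urllib.request

def wait_for_health(url: str, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll a health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # Service not up yet; retry after the interval.
        time.sleep(interval)
    return False
```

For example, `wait_for_health("http://localhost:8000/health")` returns `True` once the backend starts answering, or `False` after 30 seconds of failures.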
View logs for a service:

```bash
docker-compose logs -f [service_name]
# e.g., docker-compose logs -f backend
```

Restart a service:

```bash
docker-compose restart [service_name]
```

Stop all services:

```bash
docker-compose down
```

Reset everything (the `-v` flag also removes volumes, wiping the database):

```bash
docker-compose down -v
docker-compose up -d
```

- Explore the API documentation at http://localhost:8000/docs
- Try creating a new incident via the API
- Upload a screenshot to test VLM functionality
- Search runbooks using the RAG system
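Creating an incident via the API could look like the sketch below. The endpoint path, field names, and severity values here are assumptions made for illustration; confirm the real schema against http://localhost:8000/docs before using it:

```python
import json
import urllib.request

def build_incident_payload(title: str, severity: str = "medium") -> dict:
    """Build a minimal incident body. Field names are assumptions; check /docs."""
    if severity not in {"low", "medium", "high", "critical"}:
        raise ValueError(f"unknown severity: {severity}")
    return {"title": title, "severity": severity, "status": "open"}

def create_incident(base_url: str, payload: dict) -> dict:
    """POST the payload to the (assumed) incidents endpoint; return the JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/api/incidents",  # hypothetical path -- verify in /docs
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Usage would be something like `create_incident("http://localhost:8000", build_incident_payload("Checkout latency spike", "high"))`, with the returned JSON containing the new incident's ID.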