The "Hello World" of elite DevOps tooling that thinks
Local AI. Local Logs. Local Genius.
Kube is a revolutionary DevOps tool that bridges the gap between cryptic container logs and human understanding. It connects to your local Docker or Kubernetes environment, extracts logs from failing containers/pods, and feeds them to a local AI model (Ollama) for crystal-clear explanations.
Why this will blow your mind:
- 🛡️ Privacy First: All AI processing happens locally - no data leaves your machine
- 🤖 AI-Powered Insights: Turns "wall of logs" into actionable explanations
- 🎨 Hacker Dashboard UI: Beautiful terminal interface with real-time status
- ⚡ Lightning Fast: Asynchronous processing with live updates
- 🐳 Multi-Platform: Docker and Kubernetes support out of the box
- Local AI Integration: Uses Ollama with Llama 3 for explanations
- Docker Support: Analyze any running container
- Kubernetes Support: Pod log analysis with namespace support
- Terminal UI: Bubble Tea-powered animated interface
- Configurable: CLI flags and config file support
- Fix Suggestions: AI-generated kubectl/docker commands to resolve issues
- Streaming Analysis: Watch the AI "think" in real-time
Before you begin, ensure you have the following installed:
- Go (version 1.21 or later)
- Docker
- Ollama
After installing Ollama, pull the Llama 3 model:
```sh
ollama pull llama3
```
Install with `go install`:

```sh
go install github.com/ez0000001000000/Kube@latest
```

Or build from source:

```sh
git clone https://github.com/ez0000001000000/Kube.git
cd Kube
go mod tidy
go build -o bin/kube .
```

Build and run with Docker:

```sh
docker build -t kube .
docker run --rm kube --container my-app
```

Deploy to Kubernetes:

```sh
kubectl apply -f k8s/deployment.yaml
```

> Note: Ensure the Kubernetes deployment has access to the Docker daemon for log analysis. Adjust volumes and security context as needed.
Analyze a failing Docker container:

```sh
kube --container my-failing-app
```

With extra options:

```sh
kube \
  --container nginx \
  --tail 100 \
  --model llama3 \
  --verbose
```

Analyze a Kubernetes pod:

```sh
kube \
  --k8s \
  --pod my-web-app \
  --namespace production \
  --tail 50
```

Create a `config.yaml`:
```yaml
ollama:
  model: llama3
  host: http://localhost:11434
docker:
  tail: 20
k8s:
  namespace: default
```

Then run:
```sh
kube --config config.yaml --container my-app
```

| Flag | Description | Default |
|---|---|---|
| `--container` | Docker container name | - |
| `--pod` | Kubernetes pod name | - |
| `--namespace` | Kubernetes namespace | `default` |
| `--k8s` | Use Kubernetes instead of Docker | `false` |
| `--tail` | Number of log lines to fetch | `20` |
| `--model` | Ollama model to use | `llama3` |
| `--config` | Path to config file | - |
| `--verbose` | Enable verbose output | `false` |
| `--help` | Show help message | - |
When you run Kube, you'll see:
- Connection Phase: "🔗 Connecting to Docker..."
- Log Extraction: "📄 Fetching last 20 lines..."
- AI Analysis: "🤖 Analyzing with Llama 3..."
- Results: Clear explanation + fix suggestions
The terminal transforms into a high-end hacker dashboard with animated status updates!
Run the test suite:
```sh
go test ./...
```

We love contributions! Here's how to get started:

- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and ensure tests pass
- Format your code: `make fmt`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Bubble Tea for the amazing TUI framework
- Ollama for local AI capabilities
- Docker Go SDK for container interactions
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Built with ❤️ for the DevOps community
⭐ Star this repo if it helps you debug faster!