A powerful, modular framework for building AI-powered agents in Go
Minion is a standalone, production-ready agent framework that provides everything you need to build intelligent AI agents with custom behaviors, tools, and capabilities.
Minion is a complete system for creating, managing, and executing AI agents. It provides:
- Complete Agent Lifecycle - Create, configure, execute, and monitor agents
- Pluggable Architecture - Swap storage, LLM providers, and behaviors easily
- Production Ready - Thread-safe, observable, and battle-tested patterns
- Framework Agnostic - Use standalone or integrate with existing systems
Minion can be dropped into any Go project, from a single assistant to full multi-agent AI systems.
- **Agent Management** - Complete CRUD operations with metrics and activity tracking
- **Pluggable Behaviors** - Define custom processing logic for specialized agents
- **Tool System** - Extensible tools with capability-based filtering
- **Storage Abstraction** - In-memory, PostgreSQL (with full transaction support), or custom backends
- **Built-in Observability** - Metrics, activity logs, and performance tracking
- **Thread-Safe** - Concurrent operations with proper synchronization
- **Highly Extensible** - Easy to add new behaviors, tools, and providers
- **Multi-Agent Collaboration** - Research-based orchestrator pattern with specialized workers
- **KQML Protocol** - Industry-standard inter-agent communication
- **Task Decomposition** - LLM-powered planning and task breakdown
- **Specialized Workers** - Coder, Analyst, Researcher, Writer, Reviewer agents
- **OpenAI** - GPT-4 and GPT-3.5-turbo support
- **Anthropic** - Claude models support
- **TupleLeap** - TupleLeap AI integration
- **Custom Providers** - Easy to add your own
- **HTTP Authentication** - Bearer, API Key, and OAuth support for MCP
- **Connection Pooling** - Efficient resource management with graceful shutdown
- **Schema Validation** - JSON Schema validation with regex pattern support
- **Error Handling** - Safe environment config with error returns (no panics)
- **Chain System** - LangChain-style chains for RAG and workflows
- **LLM Request Validation** - Built-in validation for temperature, tokens, and model
- **Health Checks** - Provider health verification with the `HealthCheckProvider` interface
- **Safe Type Assertions** - Helper functions to prevent runtime panics (see the sketch after this list)
- **Goroutine Safety** - Context-aware streaming with proper cleanup
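The panic these helpers guard against is Go's unchecked type assertion. Below is a minimal sketch of the comma-ok pattern that helpers like `GetInt` presumably wrap; the signature shown is an assumption for illustration, not Minion's actual API:

```go
// Hypothetical GetInt-style helper, shown only to illustrate the
// pattern; Minion's real helpers may differ in name and signature.
// It returns a default instead of panicking on a missing key or a
// mismatched type.
func getInt(params map[string]any, key string, def int) int {
	v, ok := params[key].(int) // comma-ok assertion: never panics
	if !ok {
		return def
	}
	return v
}
```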
- **OpenTelemetry Tracing** - Full distributed tracing with Jaeger/OTLP export
- **Prometheus Metrics** - Production-ready metrics with an HTTP endpoint
- **Agent Traceability** - Track every agent execution, LLM call, and tool invocation
- **Multi-Agent Observability** - Trace orchestrator planning, worker assignment, and task completion
- **Graceful Shutdown** - Proper span flushing before process exit
- **Execution Snapshots** - Capture complete state at checkpoints
- **Time-Travel Debugging** - Navigate forward and backward through execution
- **What-If Analysis** - Branch execution and compare outcomes
- **Debug Studio TUI** - Interactive terminal UI for debugging
- **Debug API Server** - HTTP API for external debugging tools
- **State Reconstruction** - Rebuild session, task, and workspace state at any point
Installation:

```bash
go get github.com/ranganaths/minion
```

Quick Start:

```go
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/ranganaths/minion/core"
"github.com/ranganaths/minion/models"
"github.com/ranganaths/minion/storage"
"github.com/ranganaths/minion/llm"
)
func main() {
// 1. Initialize Minion
framework := core.NewFramework(
core.WithStorage(storage.NewInMemory()),
core.WithLLMProvider(llm.NewOpenAI(os.Getenv("OPENAI_API_KEY"))),
)
defer framework.Close()
// 2. Create an agent
agent, err := framework.CreateAgent(context.Background(), &models.CreateAgentRequest{
Name: "My First Minion",
Description: "A helpful AI assistant",
BehaviorType: "default",
Config: models.AgentConfig{
LLMProvider: "openai",
LLMModel: "gpt-4",
Temperature: 0.7,
MaxTokens: 500,
},
})
if err != nil {
log.Fatal(err)
}
// 3. Activate the agent
activeStatus := models.StatusActive
agent, _ = framework.UpdateAgent(context.Background(), agent.ID, &models.UpdateAgentRequest{
Status: &activeStatus,
})
// 4. Execute!
output, err := framework.Execute(context.Background(), agent.ID, &models.Input{
Raw: "What is 2 + 2?",
Type: "text",
})
if err != nil {
log.Fatal(err)
}
fmt.Printf("Agent: %v\n", output.Result)
}
```

```
┌──────────────────────────────────────────────────────┐
│                   Minion Framework                   │
├──────────────────────────────────────────────────────┤
│                                                      │
│  ┌────────────┐  ┌────────────┐  ┌────────────┐      │
│  │   Agent    │  │  Behavior  │  │   Tools    │      │
│  │  Registry  │  │  Registry  │  │  Registry  │      │
│  └────────────┘  └────────────┘  └────────────┘      │
│                                                      │
│  ┌────────────┐  ┌────────────┐  ┌────────────┐      │
│  │  Storage   │  │    LLM     │  │  Metrics   │      │
│  │  Backend   │  │  Provider  │  │  Tracker   │      │
│  └────────────┘  └────────────┘  └────────────┘      │
│                                                      │
└──────────────────────────────────────────────────────┘
```
Agents are autonomous entities that process input using LLMs and tools:

```go
agent, _ := framework.CreateAgent(ctx, &models.CreateAgentRequest{
Name: "Customer Service Agent",
Description: "Handles customer inquiries",
BehaviorType: "conversational",
Capabilities: []string{"sentiment_analysis", "knowledge_base"},
})
```

Behaviors define how agents process information:

```go
type SentimentBehavior struct{}
func (b *SentimentBehavior) GetSystemPrompt(agent *models.Agent) string {
return "You are a sentiment analysis expert..."
}
func (b *SentimentBehavior) ProcessInput(ctx context.Context, agent *models.Agent, input *models.Input) (*models.ProcessedInput, error) {
// Pre-process input before LLM
return &models.ProcessedInput{
Original: input,
Processed: enhancedInput,
Instructions: "Analyze sentiment...",
}, nil
}
func (b *SentimentBehavior) ProcessOutput(ctx context.Context, agent *models.Agent, output *models.Output) (*models.ProcessedOutput, error) {
// Post-process LLM output
return &models.ProcessedOutput{
Original: output,
Processed: enhancedOutput,
}, nil
}
// Register the behavior
framework.RegisterBehavior("sentiment_analysis", &SentimentBehavior{})
```
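Once a behavior is registered, agents opt into it by name through `BehaviorType`. A short sketch, reusing the request shape from the Quick Start (the assumption here is that `BehaviorType` keys match the names passed to `RegisterBehavior`):

```go
// Create an agent that uses the behavior registered above.
agent, err := framework.CreateAgent(ctx, &models.CreateAgentRequest{
	Name:         "Review Sentiment Bot",
	BehaviorType: "sentiment_analysis", // same key as in RegisterBehavior
})
if err != nil {
	log.Fatal(err)
}
```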
Tools are capabilities that agents can use:

```go
type CalculatorTool struct{}
func (t *CalculatorTool) Name() string {
return "calculator"
}
func (t *CalculatorTool) Execute(ctx context.Context, input *models.ToolInput) (*models.ToolOutput, error) {
result := performCalculation(input.Params)
return &models.ToolOutput{
ToolName: "calculator",
Success: true,
Result: result,
}, nil
}
func (t *CalculatorTool) CanExecute(agent *models.Agent) bool {
// Only available to agents with "math" capability
for _, cap := range agent.Capabilities {
if cap == "math" {
return true
}
}
return false
}
// Register the tool
framework.RegisterTool(&CalculatorTool{})
```
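Capability-based filtering means `CanExecute` gates the tool per agent: only agents declaring the `"math"` capability will see the calculator. A sketch, again reusing the create-request shape from earlier:

```go
// This agent passes CalculatorTool.CanExecute and can use the tool;
// an agent without the "math" capability cannot.
agent, err := framework.CreateAgent(ctx, &models.CreateAgentRequest{
	Name:         "Math Helper",
	BehaviorType: "default",
	Capabilities: []string{"math"},
})
if err != nil {
	log.Fatal(err)
}
```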
Track agent performance and activity:

```go
// Get metrics
metrics, _ := framework.GetMetrics(ctx, agentID)
fmt.Printf("Total: %d | Success: %d | Failed: %d\n",
metrics.TotalExecutions,
metrics.SuccessfulExecutions,
metrics.FailedExecutions)
fmt.Printf("Avg time: %.2fms\n", metrics.AvgExecutionTime)
// Get recent activities
activities, _ := framework.GetActivities(ctx, agentID, 10)
for _, activity := range activities {
fmt.Printf("[%s] %s - %s (%dms)\n",
activity.CreatedAt,
activity.Action,
activity.Status,
activity.Duration)
}
```

Enable distributed tracing for full agent observability:

```go
import (
	"log"
	"time"

	"github.com/Ranganaths/minion/observability"
)
// Initialize tracing
err := observability.InitGlobalTracer(observability.TracingConfig{
Enabled: true,
ServiceName: "my-agent-service",
Environment: "production",
Exporter: "otlp", // or "jaeger", "stdout"
OTLPEndpoint: "localhost:4317",
SamplingRatio: 0.1,
})
if err != nil {
log.Fatal(err)
}
defer observability.GracefulShutdown(30 * time.Second)
// All agent executions are now traced automatically!
output, err := framework.Execute(ctx, agentID, input)
// Traces include: agent.execute, llm.openai.gpt-4, tool.*, etc.
```

Expose production metrics via HTTP endpoint:

```go
import (
	"net/http"

	"github.com/Ranganaths/minion/metrics"
)
// Initialize Prometheus metrics
promMetrics := metrics.InitPrometheusMetrics(&metrics.PrometheusConfig{
Namespace: "minion",
EnableGoCollector: true,
EnableProcessCollector: true,
})
// Expose /metrics endpoint
http.Handle("/metrics", metrics.MetricsHandler())
http.ListenAndServe(":9090", nil)
```

Available Metrics:
- `minion_agent_executions_total` - Agent execution count
- `minion_llm_calls_total` - LLM API calls
- `minion_llm_tokens_total` - Total tokens used
- `minion_llm_call_duration_seconds` - LLM latency histogram
- `minion_multiagent_tasks_total` - Multi-agent task count
- `minion_tool_executions_total` - Tool invocation count
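As a quick sanity check that the endpoint is serving these series, here is a minimal sketch that scrapes `/metrics` and prints only the `minion_*` lines (it assumes the server above is listening on `:9090`):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Fetch the Prometheus exposition text from the endpoint above.
	resp, err := http.Get("http://localhost:9090/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print only the framework's own series.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		if line := scanner.Text(); strings.HasPrefix(line, "minion_") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```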
OpenAI:

```go
import "github.com/Ranganaths/minion/llm"
provider := llm.NewOpenAI(os.Getenv("OPENAI_API_KEY"))
framework := core.NewFramework(
core.WithLLMProvider(provider),
)
// With request validation (recommended for production)
req := &llm.CompletionRequest{
Model: "gpt-4",
UserPrompt: "Hello!",
Temperature: 0.7,
MaxTokens: 100,
}
if err := req.Validate(); err != nil {
log.Fatalf("Invalid request: %v", err)
}
```

Anthropic:

```go
provider := llm.NewAnthropic(os.Getenv("ANTHROPIC_API_KEY"))
framework := core.NewFramework(
core.WithLLMProvider(provider),
)
```

TupleLeap:

```go
provider := llm.NewTupleLeap(os.Getenv("TUPLELEAP_API_KEY"))
framework := core.NewFramework(
core.WithLLMProvider(provider),
)
```

Custom provider:

```go
type MyLLMProvider struct{}
func (p *MyLLMProvider) Name() string {
return "my-provider"
}
func (p *MyLLMProvider) GenerateCompletion(ctx context.Context, req *llm.CompletionRequest) (*llm.CompletionResponse, error) {
// Validate request first (recommended)
if err := req.Validate(); err != nil {
return nil, err
}
// Your implementation
return &llm.CompletionResponse{
Text: response,
TokensUsed: tokens,
Model: "my-model",
}, nil
}
func (p *MyLLMProvider) GenerateChat(ctx context.Context, req *llm.ChatRequest) (*llm.ChatResponse, error) {
if err := req.Validate(); err != nil {
return nil, err
}
	// Your implementation
	return &llm.ChatResponse{ /* ... */ }, nil
}
// Optional: Implement HealthCheckProvider for health monitoring
func (p *MyLLMProvider) HealthCheck(ctx context.Context) error {
// Check connectivity to your LLM service
return nil
}
framework := core.NewFramework(
core.WithLLMProvider(&MyLLMProvider{}),
)
```

In-memory storage:

```go
import "github.com/Ranganaths/minion/storage"
store := storage.NewInMemory()
framework := core.NewFramework(core.WithStorage(store))
```

Custom storage backend:

```go
type MyStorage struct{}
func (s *MyStorage) Create(ctx context.Context, agent *models.Agent) error {
	// Your implementation
	return nil
}
// Implement other storage.Store methods...
framework := core.NewFramework(
core.WithStorage(&MyStorage{}),
)
```

Check out the `examples/` directory for 15 comprehensive examples:
- `examples/basic/` - Simple agent creation and execution
- `examples/with_tools/` - Custom tools with capability filtering
- `examples/custom_behavior/` - Specialized agent behaviors
- `examples/multiagent-basic/` - Basic multi-agent coordinator usage
- `examples/multiagent-custom/` - Custom worker agents
- `examples/llm_worker/` - LLM-powered worker agents
- `examples/sales_agent/` - Sales analyst with visualization tools
- `examples/sales-automation/` - Automated sales workflows
- `examples/business_automation/` - Business process automation
- `examples/customer-support/` - Customer support agent
- `examples/devops-automation/` - DevOps task automation
- `examples/domain_tools/` - Domain-specific tools (marketing, sales)
- `examples/tupleleap_example/` - TupleLeap LLM provider integration
- `examples/tracing/` - Agent tracing and trace analysis API
- `examples/observability/` - OpenTelemetry + Prometheus production setup
Run an example:

```bash
cd minion/examples/basic
export OPENAI_API_KEY="your-key"
go run main.go
```

Customer support agent:

```go
agent, _ := framework.CreateAgent(ctx, &models.CreateAgentRequest{
Name: "Support Bot",
BehaviorType: "customer_service",
Capabilities: []string{"ticket_creation", "knowledge_base", "sentiment_analysis"},
})
```

Data analyst agent:

```go
agent, _ := framework.CreateAgent(ctx, &models.CreateAgentRequest{
Name: "Data Analyst",
BehaviorType: "analytical",
Capabilities: []string{"sql_generation", "visualization", "forecasting"},
})
```

Code review agent:

```go
agent, _ := framework.CreateAgent(ctx, &models.CreateAgentRequest{
Name: "Code Reviewer",
BehaviorType: "code_review",
Capabilities: []string{"static_analysis", "security_scan", "best_practices"},
})
```

Testing:

```go
func TestMinion(t *testing.T) {
// Use in-memory storage for tests
framework := core.NewFramework(
core.WithStorage(storage.NewInMemory()),
)
agent, err := framework.CreateAgent(context.Background(), &models.CreateAgentRequest{
Name: "Test Agent",
})
if err != nil {
t.Fatalf("Failed to create agent: %v", err)
}
// Test execution
output, err := framework.Execute(context.Background(), agent.ID, &models.Input{
Raw: "test input",
})
assert.NoError(t, err)
assert.NotNil(t, output)
}
```

Minion now includes a production-ready multi-agent framework based on recent research:
- Research Foundation: Draws on the "Survey of AI Agent Protocols" (arXiv:2504.16736) and implements Microsoft's "Magentic-One" orchestrator architecture (arXiv:2411.04468)
- Orchestrator Pattern: LLM-powered task decomposition and coordination
- Specialized Workers: Pre-built agents for coding, analysis, research, writing, and review
- KQML Protocol: Industry-standard agent communication
- Task & Progress Ledgers: Comprehensive execution tracking
- Custom Workers: Easily extend with domain-specific agents
Quick Start:

```go
// Initialize multi-agent system
coordinator := multiagent.NewCoordinator(llmProvider, nil)
coordinator.Initialize(ctx)
// Execute complex task
result, err := coordinator.ExecuteTask(ctx, &multiagent.TaskRequest{
Name: "Generate Sales Report",
Description: "Analyze data and create comprehensive report",
Type: "analysis",
Priority: multiagent.PriorityHigh,
})
```
Minion includes a powerful debugging system with time-travel capabilities, similar to LangGraph Studio:
- Execution Snapshots: Automatically capture state at 22+ checkpoint types
- Timeline Navigation: Step forward/backward through any execution
- State Reconstruction: Rebuild complete state at any point in time
- What-If Analysis: Create branches with modifications and compare outcomes
- Debug API: HTTP API for external tools and integrations
- Terminal UI: Interactive TUI built with Bubble Tea
```go
import (
"github.com/Ranganaths/minion/debug/snapshot"
"github.com/Ranganaths/minion/debug/recorder"
"github.com/Ranganaths/minion/debug/timetravel"
)
// Create snapshot store
store := snapshot.NewMemorySnapshotStore()
// Create recorder with hooks
rec := recorder.NewExecutionRecorder(store, recorder.DefaultRecorderConfig())
hooks := recorder.NewFrameworkHooks(rec)
// Record agent execution
agentHooks := hooks.ForAgent("my-agent")
agentHooks.OnExecutionStart(ctx, input)
// Record tool calls, LLM calls, decisions...
hooks.ForTool("my_tool").OnStart(ctx, input)
hooks.ForTool("my_tool").OnEnd(ctx, output, nil)
// End execution
agentHooks.OnExecutionEnd(ctx, output, nil)
// Time-travel through execution
timeline, _ := timetravel.NewExecutionTimeline(ctx, store, rec.GetExecutionID())
timeline.StepBackward() // Go back
timeline.JumpToNextError() // Find errors
timeline.JumpToCheckpoint(snapshot.CheckpointLLMCallStart) // Find LLM calls
// What-if analysis
branching := timetravel.NewBranchingEngine(store)
comparison, _ := branching.WhatIf(ctx, executionID, 5, &timetravel.Modification{
Type: "input",
Value: "modified_input",
})
```

```bash
# Start the debug API server
go run ./examples/debug-timetravel/main.go api
```

Endpoints:
- `GET /api/v1/executions` - List all executions
- `GET /api/v1/timeline/:id` - Get execution timeline
- `POST /api/v1/step` - Step through timeline
- `POST /api/v1/replay` - Replay from checkpoint
- `POST /api/v1/branches` - Create execution branch
- `POST /api/v1/what-if` - Run what-if analysis
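As a sketch of how an external tool might consume the API, here is a minimal client for the timeline endpoint. The host, port, execution ID, and response shape are all assumptions, so the JSON is decoded generically:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Host/port and the execution ID are placeholders; point this at
	// wherever the debug API server above is listening.
	resp, err := http.Get("http://localhost:8080/api/v1/timeline/exec-123")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The timeline payload's schema isn't documented here, so decode
	// into a generic map rather than a typed struct.
	var timeline map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&timeline); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", timeline)
}
```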
```bash
# Launch the interactive debugger
go run ./examples/debug-timetravel/main.go tui
```

Keyboard shortcuts:
- `j`/`k` - Navigate up/down
- `h`/`l` - Step backward/forward in timeline
- `e`/`E` - Jump to next/previous error
- `s` - Open state inspector
- `?` - Show help
- OpenTelemetry Tracing - Full distributed tracing with Jaeger/OTLP export
- Prometheus Metrics - Production metrics with HTTP endpoint
- Agent Traceability - Track every execution, LLM call, and tool invocation
- Debug & Time-Travel - Execution snapshots, timeline navigation, what-if analysis
- Debug Studio TUI - Interactive terminal debugger with Bubble Tea
- Debug API Server - HTTP API for debugging and time-travel operations
- Multi-agent collaboration - Research-based orchestrator with specialized workers
- Multiple LLM providers - OpenAI, Anthropic, TupleLeap
- PostgreSQL storage - Full transaction support
- MCP Integration - Model Context Protocol with HTTP authentication
- Chain System - LangChain-style RAG and workflow chains
- Production hardening - Connection pooling, graceful shutdown, error handling
- Streaming responses (partial; chain streaming is complete)
- Web UI for agent management
- Plugin system
- Google Gemini provider
- Debug & Time-Travel System - Complete debugging infrastructure with snapshots
- Execution Recorder - Capture checkpoints during agent/tool/LLM execution
- State Reconstructor - Rebuild state at any point in execution
- Branching Engine - What-if analysis with execution branching
- Debug API - REST API with timeline navigation, replay, and branching
- Debug Studio TUI - Terminal UI with execution list, timeline, state inspector
- LLM Request Validation - `Validate()` and `WithDefaults()` methods
- Health Check Interface - `HealthCheckProvider` for provider health monitoring
- Safe Type Assertions - `GetInt`, `GetFloat`, `GetBool`, `GetMap` helpers
- Goroutine Leak Prevention - Context-aware streaming in all chains
- Non-panicking Config - `RequireString`, `RequireInt`, `RequireBool` methods
- Race Condition Fixes - Atomic operations for thread-safe worker agents
MIT License - see LICENSE file for details
- Documentation: Check the `examples/` directory and inline code comments
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Minion - Your loyal AI agent framework