
PolyAgent

A production-ready AI agent platform that scales. Built with Go, Rust, and Python.

What It Does

PolyAgent lets you build AI agents that actually work in production:

  • Secure execution - Python code runs in isolated WASI sandbox
  • Budget control - Set token limits to prevent runaway costs
  • Multi-agent workflows - Coordinate multiple agents automatically
  • Provider agnostic - Works with OpenAI, Anthropic, Google, and others
  • Debugging - Replay any failed workflow step-by-step
  • Observability - Prometheus metrics and distributed tracing

Quick Start

# Clone and setup
git clone https://github.com/Kocoro-lab/PolyAgent.git
cd PolyAgent
make setup-env

# Add your API key
echo "OPENAI_API_KEY=your-key-here" >> .env

# Start everything
make dev

# Test it works
make smoke

Submit Your First Task

# Using REST API
curl -X POST http://localhost:8080/api/v1/tasks \
  -H "Content-Type: application/json" \
  -d '{"query": "Analyze the sentiment of this text: PolyAgent is great!"}'

# Using script
./scripts/submit_task.sh "What is 2+2?"

Core Features

Secure Code Execution

  • Python code runs in WASI sandbox
  • No access to host system
  • Memory and time limits enforced

Multi-Agent Coordination

  • Automatic task decomposition
  • Parallel execution where possible
  • Built-in error handling and retries
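The coordination pattern above can be sketched in a few lines: independent subtasks run in parallel, each wrapped in retries with exponential backoff. In PolyAgent the real scheduling and retry policy live in Temporal workflows; the names here (`with_retries`, `run_subtasks`) are illustrative only.

```python
# Toy sketch of parallel subtask execution with per-task retries.
# Real scheduling in PolyAgent is handled by Temporal workflows.
import time
from concurrent.futures import ThreadPoolExecutor

def with_retries(fn, attempts: int = 3, base_delay: float = 0.05):
    """Call fn(), retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

def run_subtasks(subtasks):
    """Run independent subtasks in parallel, each with retry protection."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(with_retries, subtasks))
```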

Budget Management

  • Set hard token limits per user/session
  • Real-time usage tracking
  • Cost alerts and cutoffs
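A hard limit with an alert threshold boils down to a small amount of bookkeeping. This sketch mirrors the behavior described above (alerts, then a hard cutoff); the class and method names are illustrative, not PolyAgent's actual interface.

```python
# Illustrative per-session token budget: alert at a soft threshold,
# refuse usage past the hard limit. Names are hypothetical.
class TokenBudget:
    def __init__(self, limit: int, alert_ratio: float = 0.8):
        self.limit = limit
        self.alert_ratio = alert_ratio
        self.used = 0

    def charge(self, tokens: int) -> None:
        """Record usage; raise if the hard limit would be exceeded."""
        if self.used + tokens > self.limit:
            raise RuntimeError(
                f"token budget exceeded: {self.used + tokens}/{self.limit}"
            )
        self.used += tokens

    @property
    def alert(self) -> bool:
        """True once usage crosses the soft alert threshold."""
        return self.used >= self.limit * self.alert_ratio
```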

Production Ready

  • Horizontal scaling with Temporal workflows
  • PostgreSQL for state, Redis for sessions
  • Comprehensive monitoring and alerting

API Examples

REST API

# Submit task
curl -X POST http://localhost:8080/api/v1/tasks \
  -H "Content-Type: application/json" \
  -d '{"query": "Your task here", "session_id": "session-123"}'

# Check status
curl http://localhost:8080/api/v1/tasks/task-id-123

# Stream events
curl -N "http://localhost:8081/stream/sse?workflow_id=task-id-123"
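The event stream can be consumed without extra dependencies. This sketch assumes the endpoint emits standard text/event-stream frames (`data:` lines separated by blank lines); the payload format of each event is not specified in this README, so it is yielded as raw text.

```python
# Minimal SSE frame parser (stdlib only). Assumes standard
# text/event-stream framing; event payloads are yielded as raw strings.
def parse_sse(lines):
    """Yield the data payload of each SSE event from an iterable of lines."""
    data = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            data.append(line[5:].lstrip())
        elif line == "" and data:
            # Blank line terminates the event.
            yield "\n".join(data)
            data = []
```

In practice you would feed it the response body line by line, e.g. `parse_sse(urllib.request.urlopen(url))` with bytes decoded first.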

gRPC

# Submit via gRPC
grpcurl -plaintext \
  -d '{"query":"Your task","sessionId":"session-123"}' \
  localhost:50052 polyagent.orchestrator.OrchestratorService/SubmitTask

Architecture

┌─────────────┐     ┌──────────────┐     ┌─────────────┐
│   Client    │────▶│ Orchestrator │────▶│ Agent Core  │
│  (HTTP/gRPC)│     │     (Go)     │     │   (Rust)    │
└─────────────┘     └──────────────┘     └─────────────┘
                           │                     │
                           ▼                     ▼
                    ┌──────────────┐     ┌─────────────┐
                    │   Temporal   │     │ LLM Service │
                    │   Workflows  │     │  (Python)   │
                    └──────────────┘     └─────────────┘

Configuration

Key configuration files:

  • config/polyagent.yaml - Main platform config
  • .env - API keys and secrets
  • config/models.yaml - LLM provider settings
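For orientation, a provider config such as models.yaml typically maps providers to credentials and defaults. The fragment below is a hypothetical shape only; the real keys may differ, so check the file shipped with the repository.

```yaml
# Hypothetical sketch of config/models.yaml -- field names are
# assumptions, not the actual schema.
providers:
  openai:
    api_key_env: OPENAI_API_KEY
    default_model: gpt-4o
  anthropic:
    api_key_env: ANTHROPIC_API_KEY
```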

Development

# Run tests
make test

# Format code
make fmt

# Run linters
make lint

# View logs
make logs

# Check service health
make ps

Debugging

When something goes wrong:

# Find the workflow ID from logs
grep ERROR logs/orchestrator.log

# Replay the workflow
./scripts/replay_workflow.sh workflow-id-123

# Check Temporal UI
open http://localhost:8088

Production Setup

  1. Set proper API keys in .env
  2. Configure resource limits in config/polyagent.yaml
  3. Set up monitoring (Prometheus + Grafana included)
  4. Review security policies in config/policies/

Documentation

Requirements

  • Docker and Docker Compose
  • Go 1.21+ (for development)
  • Rust 1.70+ (for development)
  • Python 3.11+ (for development)

License

MIT - see LICENSE file.

Support


Built for production. Ready to scale.

This project is based on Shannon and has been adapted and extended to fit the author's own needs.