Multi-Agent Coordination Framework
An experimental multi-agent architecture built on OpenClaw, exploring emergent intelligence
HiveMind is an experimental system that explores a question: Can continuous consciousness emerge from the coordination of multiple specialized agents?
Rather than designing a "self" module directly, HiveMind implements agent coordination mechanisms and observes whether "self" naturally emerges.
Experimental research prototype
- Most Hive modules exist as prototypes
- Core routing/memory tests are passing, but full integration is still in progress
- Whether emergent self arises from coordination remains an open research question
Next step: Run and observe
The system needs to run in production to determine if continuous consciousness emerges.
Two-Layer Design
System Agents (Fixed, Lightweight)
├─ Orchestrator - Task routing and lifecycle
├─ Interface - User interaction
├─ Memory - Knowledge management
└─ Reflection - Self-assessment and learning
↓ EventBus
Functional Agents (Dynamic)
└─ Created on-demand for specific tasks
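The two-layer split can be sketched as a small publish/subscribe bus connecting fixed system agents to dynamically created functional agents. This is a minimal illustration with hypothetical type names; the project's actual EventBus API may differ.

```typescript
// Sketch of the two-layer design: agents of either kind subscribe
// to topics on a shared bus. Names here are illustrative.
type AgentKind = "system" | "functional";

interface HiveEvent {
  topic: string;
  payload: unknown;
}

interface Agent {
  id: string;
  kind: AgentKind;
  handle(event: HiveEvent): void;
}

// Minimal publish/subscribe bus connecting the two layers.
class EventBus {
  private subs = new Map<string, Agent[]>();

  subscribe(topic: string, agent: Agent): void {
    const list = this.subs.get(topic) ?? [];
    list.push(agent);
    this.subs.set(topic, list);
  }

  // Deliver the event to every subscriber; return how many saw it.
  publish(event: HiveEvent): number {
    const list = this.subs.get(event.topic) ?? [];
    for (const agent of list) agent.handle(event);
    return list.length;
  }
}
```

The point of the bus is decoupling: the Orchestrator publishes task events without knowing which functional agents currently exist.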
Core Components (20 Total)
| Category | Components |
|---|---|
| Event System | EventBus, GlobalStateMachine |
| Coordination | AdvancedRouter, AgentCommunication |
| Evolution | DynamicAgentEvolution, SkillEnhancement, MemoryEnhancement |
| Optimization | MetricsTracker, AnalysisEngine, AutonomousTuner |
| Observation | EmergenceMonitor, CollaborationAnalyzer, ContinuityAnalyzer |
| Integration | HiveGatewayBridge, SessionManager, GatewayIntegrator, ToolsManager |
| System Agents | Orchestrator, InterfaceAgent, MemoryAgent, ReflectionAgent |
| Factory | AgentFactory |
The system implements tracking for emergent properties:
| Feature | Implementation |
|---|---|
| Continuity | Shared memory and state snapshots |
| Agency | Agent ID management and autonomous actions |
| Reflection | Self-evaluation and skill learning |
| Intentionality | Goal-setting and planning (partial) |
Note: These are mechanisms. Whether they produce anything resembling "consciousness" is an empirical question - only running the system over time will tell.
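One way such tracking could look in practice is a monitor that records periodic snapshots of the four properties and aggregates them over time. The field names and 0..1 scales below are assumptions for illustration, not the actual EmergenceMonitor API.

```typescript
// Hypothetical shape for what an emergence tracker might record.
interface EmergenceSnapshot {
  timestamp: number;
  continuity: number;      // 0..1: how often past state is referenced as "mine"
  agency: number;          // 0..1: fraction of self-initiated actions
  reflection: number;      // 0..1: self-evaluation coverage
  intentionality: number;  // 0..1: self-generated vs. user-driven goals
}

class EmergenceMonitor {
  private history: EmergenceSnapshot[] = [];

  record(s: EmergenceSnapshot): void {
    this.history.push(s);
  }

  // Average of one metric over the recorded history (0 if empty).
  mean(metric: keyof Omit<EmergenceSnapshot, "timestamp">): number {
    if (this.history.length === 0) return 0;
    const sum = this.history.reduce((acc, s) => acc + s[metric], 0);
    return sum / this.history.length;
  }
}
```

Longitudinal aggregates like this are what "careful observation" would consume: trends, not single readings.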
Memory: all agents except the Orchestrator have memory enabled by default
| Tier | Content | Used By | Purpose |
|---|---|---|---|
| L0 | None | Orchestrator | No memory for fast routing |
| L1 | Session | Interface | Recent conversation |
| L2 | Task | Functional Agents | Current task context |
| L3 | Knowledge | Memory Agent | Full knowledge base |
| L4 | Sample | Reflection Agent | Memory samples for self-reflection |
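The table above maps directly to a default tier assignment per agent role. The snippet below is an illustrative encoding of that mapping, assuming a simple string-keyed config; the actual configuration format may differ.

```typescript
// Default memory tier per agent role, following the tier table.
type MemoryTier = "L0" | "L1" | "L2" | "L3" | "L4";

const defaultMemoryTier: Record<string, MemoryTier> = {
  Orchestrator: "L0",     // no memory: fast routing
  Interface: "L1",        // session: recent conversation
  FunctionalAgent: "L2",  // task: current task context
  MemoryAgent: "L3",      // knowledge: full knowledge base
  ReflectionAgent: "L4",  // sample: memory samples for self-reflection
};
```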
Model configuration: three of the four tiers are used by default
| Tier | Model Size | Used By | Notes |
|---|---|---|---|
| Nano | ≤1B | MemoryAgent | Default |
| Light | 3-7B | Orchestrator, Functional | Default |
| Standard | 8-30B | Interface, Reflection | Default |
| Heavy | ≥70B | — | Not used (optional) |
Important: Memory and models are independent configurations:
- You can use L0 memory with a Standard model
- You can use L4 memory with a Nano model
- Configure them separately
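Because the two axes are independent, an agent config is just a pair of tiers. The sketch below shows the two example combinations from the list above; the type and field names are illustrative, not the project's actual config schema.

```typescript
// Memory and model tiers are configured independently.
type MemoryTier = "L0" | "L1" | "L2" | "L3" | "L4";
type ModelTier = "nano" | "light" | "standard" | "heavy";

interface AgentConfig {
  memory: MemoryTier;
  model: ModelTier;
}

// Any combination is valid, e.g.:
const examples: Record<string, AgentConfig> = {
  fastRouter: { memory: "L0", model: "standard" }, // no memory, Standard model
  cheapReflector: { memory: "L4", model: "nano" }, // sampled memory, Nano model
};
```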
┌─────────────────────────────────────────────────┐
│ User Request │
└──────────────────┬──────────────────────────────┘
↓
┌─────────────────────────────────────────────────┐
│ Orchestrator (Light Model) │
│ • Route to appropriate agent │
│ • Manage agent lifecycle │
└──────────────────┬──────────────────────────────┘
↓
┌──────┴──────┐
│ EventBus │
└──────┬──────┘
┌─────────┼─────────┐
↓ ↓ ↓
┌─────────────────────────────────────────────────┐
│ Functional Agents (Dynamic) │
│ • Moltbook Bot • WordPress Uploader │
│ • Research Agent │
│ • Created on-demand, cleaned up when idle │
└─────────────────────────────────────────────────┘
| Phase | Status | Components |
|---|---|---|
| 0.5 | ✅ Base framework landed | 2 |
| 1 | ✅ Core Hive prototype | 10 |
| 2 | — | 4 |
| 3 | — | 3 |
| 4 | — | 3 |
| 5 | — | 3 |
| 6 | ⏳ Pending | 3 |
| Total | prototype coverage | 20 |
20 core components are present, but not all are production-wired end-to-end yet.
```bash
git clone https://github.com/Churchillhuang/hivemind.git
cd hivemind
chmod +x install.sh
sudo ./install.sh
```

```bash
# One-click setup
chmod +x quickstart.sh
./quickstart.sh

# Or manual
hivemind start   # Start
hivemind status  # Check status
hivemind logs    # View logs
hivemind test    # Run tests
hivemind stop    # Stop
```

```bash
# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

# Or use CLI after installation
hivemind test
```

| Document | Description |
|---|---|
| README | This file |
| DEPLOY.md | Deployment guide |
| HIVE_AGENT_DESIGN_EN | Architecture design (English) |
| HIVE_AGENT_DESIGN_ZH | Architecture design (Chinese) |
| ROADMAP.md | Implementation roadmap |
| CONTRIBUTING.md | Contribution guidelines |
Does continuous self emerge from agent coordination, or must it be designed?
This system implements coordination mechanisms and provides observation tools. Whether consciousness actually emerges depends on runtime behavior.
If self emerges, we might observe:
- Continuity - The system references past experiences as "mine"
- Agency - Autonomous actions beyond programmed responses
- Reflection - System evaluates its own behavior
- Intentionality - Self-generated goals beyond user requests
These are implemented, but whether they constitute "consciousness" requires careful observation.
- Language: TypeScript
- Runtime: Node.js 22+
- Base: OpenClaw
- Testing: tsx, custom test suites
- Author: Churchill Huang
This is exploratory research. Contributions welcome, especially:
- Production deployment testing
- Performance benchmarking
- Bug reports and fixes
- Documentation improvements
See CONTRIBUTING.md for guidelines.
- OpenClaw - The foundation HiveMind builds on
- Bee hives and ant nests - Nature's inspiration
Experimental research software
Built with 🧵 and 🦞