A decentralized network system where nodes collaborate to process LLM computations using NATS messaging, end-to-end encryption, and MPC-style secret sharing for aggregation.
The project is organized as a uv workspace with four packages:
Core types and configuration shared across all packages.
- `Computation`: Pydantic model for computation requests with tuple-based `response_schema`
- `SecureComputeParams`: Configuration for MPC/FHE computation (modulus_bits, precision, function)
- `ComputationResult`: Pydantic model for computation results
- `TupleResponse`: Response wrapper for tuple of integer values
- Shared constants (NATS config)
Dependencies: pydantic only
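For orientation, a rough sketch of how these models might fit together (field names and defaults beyond those listed above are assumptions, not the actual bee-hive-core API):

```python
from typing import Tuple
from pydantic import BaseModel

class SecureComputeParams(BaseModel):
    modulus_bits: int = 64    # bit-width of the sharing modulus (see swarm notes below)
    precision: int = 16       # fractional bits for fixed-point encoding (assumed default)
    function: str = "sum"     # aggregation function (assumed default)

class Computation(BaseModel):
    query: str
    response_schema: Tuple[int, ...] = (1,)   # tuple-based shape of the expected response
    params: SecureComputeParams = SecureComputeParams()

    @property
    def response_length(self) -> int:
        # Plausible relation: one integer per schema entry, e.g. (1, 1) -> 2
        return len(self.response_schema)
```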
Secure computation module for MPC operations and fixed-point encoding.
- Fixed-point encoding/decoding (float ↔ int with configurable precision)
- Secret share generation (additive sharing mod 2^modulus_bits)
- Local share aggregation (element-wise summation)
- Interactive Open Protocol for aggregators to reveal final results
- Transport abstraction (`ProtocolSession`) for protocol communication
Key Design: Uses modulus_bits (e.g., 64) instead of the actual modulus value (2^64) for serialization compatibility with msgpack.
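For example, a share generator would only ever serialize modulus_bits and rebuild the modulus locally. A minimal sketch of the idea (the helper name is hypothetical, not the swarm API):

```python
import random

def make_shares(secret: int, n_parties: int, modulus_bits: int = 64) -> list[int]:
    # 2**64 itself overflows msgpack's 64-bit integer types, so only
    # modulus_bits crosses the wire; each party reconstructs the modulus.
    modulus = 1 << modulus_bits
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % modulus)
    return shares  # shares sum to `secret` mod 2**modulus_bits; each fits in a uint64
```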
Dependencies: bee-hive-core only
Independent handler framework for writing and testing computation handlers.
- `@handler` decorator for creating computation handlers
- Handler validation and testing utilities (`nectar test`)
- Handler daemon process with file watching
- Complete CLI for handler management (launch, attach, detach, logs)
- Zero dependencies on flower (completely decoupled)
Dependencies: bee-hive-core, watchdog, click, loguru
CLI: nectar command with 7 subcommands
📖 See nectar documentation →
Network layer with identity management, encryption, and node management.
- Node classes (BaseNode, LightNode, HeavyNode)
- NATS-based P2P communication with E2E encryption
- Identity management and key storage
- File-based computation dispatch (writes `.pending`, reads `.complete`)
- CLI for node management
- No dependency on nectar (handlers attached at runtime)
Dependencies: bee-hive-core, nats-py, cryptography, loguru, msgpack, click
CLI: bee-hive command with 6 subcommands
📖 See flower documentation →
- Handlers are independent services: Run in separate processes with their own dependencies
- File-based communication: Flower nodes write `.pending` files, handlers write `.complete` files
- Multi-alias support: One handler can serve multiple nodes simultaneously
- Dynamic attachment: Attach/detach handlers without restarting nodes
1. Write handler with @handler decorator
2. Test: nectar test my_handler.py
3. Launch: nectar launch my_handler.py handler_name
4. Attach: nectar attach handler_name alice
5. Node writes `.pending` → Handler processes → Handler writes `.complete` (sketched below)
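A minimal sketch of the daemon side of step 5, using watchdog (the on-disk format and directory here are assumptions; the real nectar daemon adds validation, logging, and multi-alias routing):

```python
import json
from pathlib import Path
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

def my_handler(comp: dict) -> tuple:
    # Stand-in for a @handler-decorated function.
    return (len(comp.get("query", "")) * 2,)

class PendingWatcher(FileSystemEventHandler):
    def on_created(self, event):
        path = Path(str(event.src_path))
        if path.suffix != ".pending":
            return
        computation = json.loads(path.read_text())   # assumed JSON on disk
        result = my_handler(computation)             # tuple of ints
        path.with_suffix(".complete").write_text(json.dumps(list(result)))

watch_dir = Path.home() / ".bee-hive" / "alice" / "data" / "computation"
observer = Observer()
observer.schedule(PendingWatcher(), str(watch_dir), recursive=False)
observer.start()
```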
Production (~/.bee-hive):
- Default behavior when no flag specified
- Persistent data across sessions
- Suitable for long-running production nodes
Testing (./sandbox):
- Explicit `--data-dir ./sandbox` flag
- Isolated test data in project directory
- Clean separation from production
- Easy to reset with `./scripts/reset.sh`
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install Docker Desktop and run the daemon
# Required for NATS server
# Clone the repository
cd bee-hive
# Create virtual environment
uv venv -p 3.12
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install packages in development mode
uv pip install -e packages/bee-hive-flower
uv pip install -e packages/bee-hive-nectar
# bee-hive-core is automatically installed as a dependency

After making changes to the codebase, reinstall packages to pick up modifications:
# Force reinstall all packages in editable mode
uv pip install --force-reinstall -e packages/*
# This reinstalls:
# - packages/bee-hive-core (shared types and config)
# - packages/bee-hive-nectar (handler framework)
# - packages/bee-hive-flower (network layer and CLI)

The project includes automated integration tests in ./scripts/:
# Run full integration test (starts NATS, registers nodes, attaches handlers, runs computations)
./scripts/test.sh
# Expected output:
# - 6 nodes registered (alice, bob, charlie, dave, eve, frank)
# - 1 handler attached to target nodes
# - 2 test computations with different aggregator/generator configurations

Test Environment: Tests run in the isolated ./sandbox directory (separate from production ~/.bee-hive) against a NATS server deployed on localhost.
# 1. Start NATS server (required first)
./scripts/start_nats.sh
# 2. Register test nodes (alice, bob, charlie, dave, eve, frank)
./scripts/start_nodes.sh
# 3. Attach example handler to all nodes
./scripts/attach_handlers.sh
# 4. View node status
uv run bee-hive --data-dir ./sandbox list
# 5. View handler status
uv run nectar --data-dir ./sandbox view

# Complete cleanup: stops all processes, removes sandbox, resets NATS
./scripts/reset.sh
# Then start fresh:
./scripts/test.sh

Orphaned Processes:
# Check for orphaned node processes
ps aux | grep -E "HeavyNode|LightNode"
# Kill all orphaned processes
./scripts/reset.sh  # Includes aggressive process cleanup

Node Count Issues:
# Verify correct node count (should be 6 for tests)
uv run bee-hive --data-dir ./sandbox list | grep -c "🟢 running"
# If incorrect, run reset and restart
./scripts/reset.sh
./scripts/test.sh

NATS Server Issues:
# Check NATS server status
docker ps | grep bee-hive-server
# Restart NATS server
docker-compose restart
# Full reset (removes NATS data volume)
./scripts/reset.sh

This section shows how the packages work together in a complete workflow.
docker-compose up -d

# Create handler file
cat > my_handler.py <<'EOF'
from nectar import handler, Computation
from typing import Tuple
@handler
def analyze(comp: Computation) -> Tuple[int, ...]:
    # Handler must return a tuple matching computation.response_length
    return (len(comp.query) * 2,)
EOF
# Test it locally (no network required)
uv run nectar test my_handler.py
# Output: ✅ Handler test passed! Result: (84,)

# Register a heavy node (aggregator)
uv run bee-hive register
# Enter: heavy, h1, h1@example.com, password
# Register a light node (worker)
uv run bee-hive register
# Enter: light, alice, alice@example.com, password
# List nodes
uv run bee-hive list
# Shows: 2 nodes (h1, alice)

# Launch handler as daemon
uv run nectar launch my_handler.py sentiment_v1
# Attach to nodes
uv run nectar attach sentiment_v1 alice
uv run nectar attach sentiment_v1 h1
# View handler status
uv run nectar view
# Shows: sentiment_v1 (running) watching alice, h1

# Submit computation
uv run bee-hive submit "Test query" \
--proposer alice \
--aggregators h1 \
--targets alice,h1 \
--deadline 30
# Watch handler process it
uv run nectar logs sentiment_v1
# Check results (after deadline)
cat ~/.bee-hive/alice/data/final_*.json

                        User Command
                             ↓
┌──────────────────────────────────────────────────────────┐
│ bee-hive submit (flower CLI)                             │
│ - Creates computation with generator + aggregators       │
│ - Sends to aggregators via IPC                           │
└──────────────────────────────────────────────────────────┘
                             ↓
┌──────────────────────────────────────────────────────────┐
│ HeavyNode (flower)                                       │
│ - First aggregator distributes to targets via NATS       │
│ - Generator provides preprocessing material (MPC: none)  │
└──────────────────────────────────────────────────────────┘
                             ↓
┌──────────────────────────────────────────────────────────┐
│ LightNode (flower)                                       │
│ - Receives computation via NATS                          │
│ - Writes .pending file                                   │
└──────────────────────────────────────────────────────────┘
                             ↓
┌──────────────────────────────────────────────────────────┐
│ Handler Daemon (nectar)                                  │
│ - Watches for .pending files                             │
│ - Executes @handler function                             │
│ - Writes .complete file (tuple of integers)              │
└──────────────────────────────────────────────────────────┘
                             ↓
┌──────────────────────────────────────────────────────────┐
│ LightNode (flower + swarm)                               │
│ - Reads .complete file                                   │
│ - Encodes integers to fixed-point (swarm)                │
│ - Generates secret shares (swarm)                        │
│ - Sends one share-tuple to each aggregator               │
└──────────────────────────────────────────────────────────┘
                             ↓
┌──────────────────────────────────────────────────────────┐
│ HeavyNode (flower + swarm)                               │
│ - Aggregates local shares (swarm)                        │
│ - Runs Open Protocol with other aggregators (swarm)      │
│ - First aggregator decodes and sends to proposer         │
└──────────────────────────────────────────────────────────┘
                             ↓
┌──────────────────────────────────────────────────────────┐
│ Proposer (flower)                                        │
│ - Receives decoded float result                          │
│ - Writes result to disk                                  │
└──────────────────────────────────────────────────────────┘
- Generator: Heavy node that provides preprocessing material (for MPC: none needed; for future FHE: key generation). Can be separate from aggregators.
- Aggregators: Heavy nodes that receive shares, run local aggregation, and participate in the Open Protocol to reveal results.
- Example: with `--aggregators alice,bob --generator frank`, frank provides preprocessing while alice and bob aggregate shares.
Handlers return tuples of integers with length specified by computation.response_length:
from nectar import handler, Computation
from typing import Tuple
import random
@handler
def my_handler(comp: Computation) -> Tuple[int, ...]:
    # comp.response_schema defines expected tuple structure, e.g., (1,) or (1, 1)
    # comp.response_length gives the number of values expected
    result = [len(comp.query)]  # First value: query length
    for _ in range(comp.response_length - 1):
        result.append(random.randint(0, 100))  # Additional values
    return tuple(result)

Submitting with custom schema:
# Single value (default): response_schema=(1,)
uv run bee-hive submit "Query" --proposer alice --aggregators h1 --targets alice
# Two values: response_schema=(1, 1)
uv run bee-hive submit "Query" --proposer alice --aggregators h1 --targets alice --response-schema "1,1"

MPC Aggregation: Handler results are encoded to fixed-point integers (with configurable precision), secret-shared among aggregators, and aggregated via the interactive Open Protocol. The first aggregator decodes the final result back to floats and sends it to the proposer.
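The whole pipeline in miniature (a self-contained sketch of the arithmetic, not the swarm API; two targets, two aggregators):

```python
import random

MODULUS_BITS, PRECISION = 64, 16
MOD = 1 << MODULUS_BITS

def encode(x: float) -> int:                  # fixed-point encode
    return int(round(x * (1 << PRECISION))) % MOD

def decode(x: int) -> float:                  # fixed-point decode, upper half = negative
    if x >= MOD // 2:
        x -= MOD
    return x / (1 << PRECISION)

def share(secret: int, n: int) -> list[int]:  # additive sharing mod 2**MODULUS_BITS
    parts = [random.randrange(MOD) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % MOD)
    return parts

handler_outputs = [3.0, 4.5]                  # one value per target node
per_target_shares = [share(encode(v), n=2) for v in handler_outputs]
# Each aggregator sums the one share it got from every target (local aggregation) ...
local_sums = [sum(col) % MOD for col in zip(*per_target_shares)]
# ... then the Open Protocol combines the aggregators' local sums to reveal the total.
print(decode(sum(local_sums) % MOD))          # 7.5
```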
✅ Complete Decoupling: Handlers run independently from the network layer
✅ Zero Downtime: Attach/detach handlers without restarting nodes
✅ Multi-Alias Support: One handler can serve multiple nodes
✅ Local Testing: Test handlers without running the network
✅ Independent Dependencies: Each handler can have its own dependencies
✅ Process Isolation: Handler crashes don't affect the network
✅ One Handler Per Alias: Enforced to prevent conflicts
✅ Graceful Degradation: Nodes work without handlers (accumulate .pending files)
✅ Cross-Machine Support: Nodes can run on different physical machines
✅ E2E Encryption: Hybrid RSA + AES encryption for all messages (sketched below)
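A minimal sketch of that hybrid pattern using the cryptography package (illustrative only; flower's actual key sizes, padding, and message framing may differ):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def hybrid_encrypt(recipient_public_key, plaintext: bytes):
    session_key = AESGCM.generate_key(bit_length=256)    # fresh AES key per message
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    wrapped_key = recipient_public_key.encrypt(          # RSA-OAEP wraps the AES key
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, nonce, ciphertext

# Round trip with a throwaway keypair:
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped, nonce, ct = hybrid_encrypt(key.public_key(), b"hello, hive")
session_key = key.decrypt(wrapped, padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None))
print(AESGCM(session_key).decrypt(nonce, ct, None))      # b'hello, hive'
```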
~/.bee-hive/
├── nectar/                      # Independent handler service
│   ├── handlers.json            # Handler metadata
│   ├── handlers/                # IPC sockets
│   │   └── sentiment_v1.sock
│   └── logs/                    # Handler logs
│       └── sentiment_v1.log
│
├── alice/                       # Node data (flower)
│   ├── identities.json          # Node's view of network
│   ├── keys/
│   │   ├── private_key.pem
│   │   └── public_key.pem
│   └── data/
│       ├── local.db
│       ├── node.log
│       ├── computation/         # Handler watches this
│       │   ├── *.pending        # Written by node
│       │   └── *.complete       # Written by handler
│       └── final_*.json         # Aggregated results
│
└── bob/                         # Another node
    └── ...
See examples/ directory:
- example_handlers/handler_query_length.py: Example handler used in tests
- More examples coming soon
- README.md (this file): Integration, testing, and quick start
- packages/bee-hive-nectar/README.md: Nectar-specific documentation
- packages/bee-hive-flower/README.md: Flower-specific documentation
MIT