cronos3k/chainlightning


UDP Bonding - Smart Packet-Level Bandwidth Aggregation

A custom UDP-based packet bonding system with adaptive link selection that prioritizes low-latency links and intelligently overflows to high-bandwidth links.

Architecture

LAN Clients → TUN (192.168.50.1) → CLIENT (smart striping) → [5 UDP streams] → SERVER → Internet
                                           ↓
                          Link Selection Algorithm:
                          1. Low load  → Use ADSL only (low latency)
                          2. High load → Saturate ADSL, then add Starlink

Key Features

1. Smart Adaptive Algorithm

  • NOT round-robin - dynamically selects best link per packet
  • Prioritizes low-latency links (ADSL) for light traffic
  • Automatically adds high-bandwidth links (Starlink) when needed
  • Real-time bandwidth and latency monitoring

2. Link Scoring System

Each link gets a score based on:

  • Priority (wg0=highest, wg1-4=lower)
  • RTT (measured every 100ms via heartbeats)
  • Saturation (huge penalty if >80% utilized)
  • Packet loss

Lower score = better link → gets selected

3. Heartbeat Protocol

  • Sends probes every 100ms to all links
  • Measures real-time RTT
  • Tracks bandwidth utilization
  • Updates link metrics continuously
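The RTT half of this protocol can be sketched as follows, assuming the client timestamps each outgoing probe and smooths samples with an exponentially weighted moving average. The `RttTracker` name, its fields, and the 0.125 smoothing factor (the classic TCP value) are illustrative, not taken from the repo:

```rust
use std::time::Instant;

/// Per-link RTT state (hypothetical helper; names are illustrative).
struct RttTracker {
    srtt_ms: f64,            // smoothed RTT in milliseconds (EWMA)
    sent_at: Option<Instant>,
}

impl RttTracker {
    fn new() -> Self {
        Self { srtt_ms: 0.0, sent_at: None }
    }

    /// Record the moment a heartbeat probe (flag 0x01) is sent.
    fn on_probe_sent(&mut self) {
        self.sent_at = Some(Instant::now());
    }

    /// On receiving the echoed heartbeat, fold the sample into the EWMA.
    fn on_probe_echoed(&mut self) {
        if let Some(t) = self.sent_at.take() {
            let sample = t.elapsed().as_secs_f64() * 1000.0;
            self.srtt_ms = if self.srtt_ms == 0.0 {
                sample // first sample seeds the average
            } else {
                0.875 * self.srtt_ms + 0.125 * sample
            };
        }
    }
}
```

Smoothing matters here because a single delayed probe on an otherwise-good link should nudge, not spike, that link's score.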

Network Configuration

Client (Router/Gateway)

  • TUN interface: 192.168.50.1/24
  • 5 WireGuard links (example):
    • wg0: 10.200.0.1 → Priority 1 (ADSL, ~10Mbps, ~20ms)
    • wg1: 10.200.1.1 → Priority 2 (Starlink, ~70Mbps, ~80ms)
    • wg2: 10.200.2.1 → Priority 2 (Starlink, ~70Mbps, ~80ms)
    • wg3: 10.200.3.1 → Priority 2 (Starlink, ~70Mbps, ~80ms)
    • wg4: 10.200.4.1 → Priority 2 (Starlink, ~70Mbps, ~80ms)
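For reference, a wg-quick style configuration for the first link might look like the sketch below. The keys, port, and endpoint are placeholders and the repo does not ship this file; only the `Address` matches the table above:

```ini
# /etc/wireguard/wg0.conf (illustrative)
[Interface]
Address = 10.200.0.1/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = YOUR_SERVER:51820
AllowedIPs = 10.200.0.0/24
PersistentKeepalive = 25
```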

Server (VPS)

  • TUN interface: 10.99.0.1/24
  • 5 UDP listeners: Ports 9001-9005
  • Packet reassembly: Handles out-of-order delivery
  • NAT/forwarding: Routes to internet
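The reassembly step can be sketched as a sequence-number reorder buffer: packets arriving out of order across the 5 UDP streams are held until the next expected sequence number shows up, then released in order. The names below are illustrative, not the repo's actual types:

```rust
use std::collections::BTreeMap;

/// Minimal reorder-buffer sketch for the server side (hypothetical names).
struct ReorderBuffer {
    next_seq: u64,
    pending: BTreeMap<u64, Vec<u8>>, // held out-of-order payloads, keyed by seq
}

impl ReorderBuffer {
    fn new() -> Self {
        Self { next_seq: 0, pending: BTreeMap::new() }
    }

    /// Accept one packet; return every payload that is now deliverable in order.
    fn push(&mut self, seq: u64, payload: Vec<u8>) -> Vec<Vec<u8>> {
        if seq >= self.next_seq {
            self.pending.insert(seq, payload);
        } // else: duplicate of an already-delivered packet, drop it
        let mut ready = Vec::new();
        while let Some(p) = self.pending.remove(&self.next_seq) {
            ready.push(p);
            self.next_seq += 1;
        }
        ready
    }
}
```

A real implementation would also need a timeout or window limit so a permanently lost sequence number cannot stall delivery forever.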

Protocol Specification

Packet Format

┌──────────────┬────────────┬──────────┬─────────┐
│ seq_num (8B) │ flags (1B) │ len (2B) │ payload │
└──────────────┴────────────┴──────────┴─────────┘

Flags:

  • 0x01 - Heartbeat packet (for RTT measurement)
  • 0x02 - Data packet (IP packet payload)
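An encoder/decoder for this header is a few lines of Rust. This is a minimal sketch assuming big-endian field order; the repo's actual byte order and function names may differ:

```rust
/// Wire header: seq_num (8 B) | flags (1 B) | len (2 B), big-endian fields.
const HEADER_LEN: usize = 11;
const FLAG_HEARTBEAT: u8 = 0x01;
const FLAG_DATA: u8 = 0x02;

fn encode(seq: u64, flags: u8, payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(HEADER_LEN + payload.len());
    buf.extend_from_slice(&seq.to_be_bytes());                    // 8-byte sequence number
    buf.push(flags);                                              // 1-byte flags
    buf.extend_from_slice(&(payload.len() as u16).to_be_bytes()); // 2-byte payload length
    buf.extend_from_slice(payload);
    buf
}

fn decode(buf: &[u8]) -> Option<(u64, u8, &[u8])> {
    if buf.len() < HEADER_LEN {
        return None; // truncated header
    }
    let seq = u64::from_be_bytes(buf[0..8].try_into().ok()?);
    let flags = buf[8];
    let len = u16::from_be_bytes([buf[9], buf[10]]) as usize;
    let payload = buf.get(HEADER_LEN..HEADER_LEN + len)?; // None if body truncated
    Some((seq, flags, payload))
}
```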

Link Selection Algorithm

// Assumes a `Link` struct whose metrics are kept as integers
// (priority, rtt_ms, packet_loss: u64; saturated: bool) so scores
// are directly comparable.
fn select_link(links: &[Link]) -> usize {
    links
        .iter()
        .enumerate()
        .min_by_key(|(_, link)| {
            link.priority * 1000                          // priority weight
                + link.rtt_ms * 10                        // latency penalty
                + if link.saturated { 10_000 } else { 0 } // saturation penalty (>80% utilized)
                + link.packet_loss * 5000                 // loss penalty
        })
        .map(|(index, _)| index)                          // lowest score wins
        .expect("at least one link configured")
}

Build & Deploy

1. Build

cd udp-bonding
cargo build --release --bin server
cargo build --release --bin client

2. Deploy Server (VPS)

# Copy binary
scp target/release/server root@YOUR_SERVER:/usr/local/bin/bonding-server

# On VPS:
sudo /usr/local/bin/bonding-server

# Configure routing
sudo ip route add 0.0.0.0/0 via 10.99.0.2 dev tun-bond table 100
sudo ip rule add from 10.99.0.0/24 lookup 100
sudo sysctl -w net.ipv4.ip_forward=1

# Enable NAT
sudo iptables -t nat -A POSTROUTING -s 10.99.0.0/24 -j MASQUERADE

3. Deploy Client (Router)

# Copy binary
scp target/release/client user@YOUR_ROUTER:/usr/local/bin/bonding-client

# On router:
sudo /usr/local/bin/bonding-client

# Route LAN traffic through bonding (adjust YOUR_LAN_SUBNET)
sudo ip route add default via 192.168.50.2 dev tun-bond table 200
sudo ip rule add from YOUR_LAN_SUBNET lookup 200

Monitoring

Client Logs

=== Link Metrics ===
  Link0 (10.200.0.1) - RTT: 22.5ms, BW: 8.2/10.0 Mbps (82%), Score: 1225, SATURATED
  Link1 (10.200.1.1) - RTT: 78.3ms, BW: 45.1/70.0 Mbps (64%), Score: 2783, OK
  Link2 (10.200.2.1) - RTT: 81.2ms, BW: 43.8/70.0 Mbps (63%), Score: 2812, OK
  ...

What this tells you:

  • Link0 (ADSL) is saturated at 8.2 Mbps (82% of its ~10 Mbps capacity) → System adds Starlink links
  • Link1-2 are active with good throughput
  • RTT measurements show ~20ms (ADSL) vs ~80ms (Starlink)
  • Scores match the selection formula, e.g. Link1: 2×1000 (priority) + 78.3×10 (RTT) ≈ 2783

Server Logs

=== Link Statistics ===
  10.200.0.1:54321 - Packets: 15234, Bytes: 18432156, Last seq: 15234, RTT: 22ms

Performance Characteristics

| Scenario          | Links Used                       | Expected Throughput | Expected Latency   |
|-------------------|----------------------------------|---------------------|--------------------|
| Light browsing    | wg0 only                         | ~10 Mbps            | ~20ms (excellent)  |
| Large download    | wg0 + wg1-4                      | ~290 Mbps           | ~70ms (acceptable) |
| Gaming + download | wg0 for game, wg1-4 for download |                     | Game: ~20ms        |

Advantages Over Alternatives

| Solution                 | Packet-level?       | Smart selection? | Complexity           |
|--------------------------|---------------------|------------------|----------------------|
| HAProxy                  | ❌ (per-connection) | ❌ (round-robin) | Medium               |
| MPTCP                    | ⚠️ (kernel)         |                  | High (kernel config) |
| OpenMPTCProuter          |                     |                  | Very High (full VM)  |
| UDP Bonding (our system) | ✅                  | ✅ (adaptive)    | Low (2 binaries)     |

Future Enhancements

  1. FEC (Forward Error Correction): Add redundancy for lossy links
  2. QoS tagging: Detect game traffic (small packets, high frequency) and pin to low-latency links
  3. Dynamic link discovery: Auto-detect new WireGuard interfaces
  4. Compression: Optional payload compression for low-bandwidth links
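One common shape for item 1 is XOR parity FEC: send one parity packet per group of k data packets, so any single loss in the group is recoverable without retransmission. A sketch of the idea (illustrative code, not from the repo):

```rust
/// XOR parity over a group of packets; recovers any single loss in the group.
fn xor_parity(group: &[Vec<u8>]) -> Vec<u8> {
    let max_len = group.iter().map(|p| p.len()).max().unwrap_or(0);
    let mut parity = vec![0u8; max_len];
    for pkt in group {
        for (i, b) in pkt.iter().enumerate() {
            parity[i] ^= b; // byte-wise XOR accumulates across the group
        }
    }
    parity
}

/// Rebuild the one missing packet by XOR-ing the parity with the survivors.
fn recover(survivors: &[Vec<u8>], parity: &[u8]) -> Vec<u8> {
    let mut out = parity.to_vec();
    for pkt in survivors {
        for (i, b) in pkt.iter().enumerate() {
            out[i] ^= b;
        }
    }
    out
}
```

The trade-off: 1/k extra bandwidth per group buys immunity to one loss per group, which suits lossy high-latency links where retransmission would cost a full RTT.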

About

A different take on link aggregation for network bandwidth bundling.
