A dual-verifiable framework for federated learning using zero-knowledge proofs to ensure training integrity and aggregation correctness.
- Dual ZKP Verification: Client-side zk-STARKs + Server-side zk-SNARKs
- FedJSCM Aggregation: Momentum-based federated optimization
- Dynamic Proof Rigor: Adaptive proof complexity based on training stability
- Parameter Quantization: ZKP-compatible weight compression
Client Training + zk-STARK Proof → FL Server + zk-SNARK Proof → Verified Model
The system provides dual verification:
- Clients generate zk-STARK proofs of correct local training
- Server generates zk-SNARK proofs of correct aggregation
# Install the package with uv (recommended)
uv pip install secure-fl
# Or install from source with uv
git clone https://github.com/krishantt/secure-fl
cd secure-fl
uv pip install -e .
# For development with all dependencies
uv sync --all-extras

Install zero-knowledge proof tools:
# Automated setup with make (recommended)
make setup-zkp
# Or manual setup:
# 1. Install Rust
curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh
# 2. Install Circom
git clone https://github.com/iden3/circom.git
cd circom && cargo install --path circom
# 3. Install SnarkJS
npm install -g snarkjs
# Verify setup
uv run secure-fl check-zkp

from secure_fl import SecureFlowerServer, create_server_strategy
import torch.nn as nn
# Define model
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x.view(-1, 784))

# Create server with ZKP verification
strategy = create_server_strategy(
    model_fn=SimpleModel,
    enable_zkp=True,
    proof_rigor="high"
)
server = SecureFlowerServer(strategy=strategy)
server.start(num_rounds=10)

Create and use a configuration file:
# Create example config
uv run secure-fl create-config
# Edit config.yaml as needed
# Then use it:

from secure_fl import create_client, start_client
from torchvision import datasets, transforms
# Load data
transform = transforms.Compose([transforms.ToTensor()])
dataset = datasets.MNIST('./data', train=True, download=True, transform=transform)
# Create secure client
client = create_client(
    client_id="client_1",
    model_fn=SimpleModel,
    train_data=dataset,
    enable_zkp=True
)

# Connect to server
start_client(client, "localhost:8080")

# Start server
uv run secure-fl-server --config config.yaml
# Start client
uv run secure-fl-client --server localhost:8080 --dataset mnist --client-id client_1
# Check system status
uv run secure-fl check-zkp

- Client-side (zk-STARKs): Prove correct SGD computation using Cairo circuits
- Server-side (zk-SNARKs): Prove correct FedJSCM aggregation using Circom circuits
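To make the flow concrete, here is a minimal sketch of one dual-verified round. All of the method names below (train_local, prove_training_stark, verify_stark, aggregate_fedjscm, prove_aggregation_snark) are hypothetical placeholders used for illustration, not part of the secure_fl API:

# Hypothetical round flow; none of these method names are the package's real API.
def dual_verified_round(server, clients):
    verified_updates = []
    for client in clients:
        update = client.train_local()                       # local SGD on the client's private data
        stark_proof = client.prove_training_stark(update)   # zk-STARK: update came from correct training
        if server.verify_stark(update, stark_proof):        # server keeps only updates with valid proofs
            verified_updates.append(update)
    new_global = server.aggregate_fedjscm(verified_updates)    # momentum-based FedJSCM aggregation
    snark_proof = server.prove_aggregation_snark(new_global)   # zk-SNARK: aggregation was performed correctly
    return new_global, snark_proof                              # clients can then verify the server-side proof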
Momentum-based federated averaging:
w_{t+1} = w_t - η_g * (β * m_t + (1 - β) * ∇F_t)
where ∇F_t is the federated gradient, m_t is the momentum buffer, η_g is the global learning rate, and β is the momentum coefficient.
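A minimal NumPy sketch of this update rule follows. It is illustrative only: the momentum-buffer recurrence m_{t+1} = β·m_t + (1 - β)·∇F_t, the default η_g and β values, and the simple averaging used to form ∇F_t are assumptions; the actual implementation lives in secure_fl/aggregation.py.

import numpy as np

def fedjscm_step(w, m, grad_f, eta_g=0.1, beta=0.9):
    # Assumed momentum-buffer update: m_{t+1} = beta * m_t + (1 - beta) * grad_F_t
    m_next = beta * m + (1 - beta) * grad_f
    # Global update from the formula above: w_{t+1} = w_t - eta_g * m_{t+1}
    w_next = w - eta_g * m_next
    return w_next, m_next

# Example: one global round over three client updates (federated gradient as a plain mean)
client_grads = [np.array([0.2, -0.1]), np.array([0.1, 0.0]), np.array([0.3, -0.2])]
grad_f = np.mean(client_grads, axis=0)
w, m = np.zeros(2), np.zeros(2)
w, m = fedjscm_step(w, m, grad_f)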
Automatically adjusts ZKP complexity based on training stability:
- High stability: Reduced proof complexity for efficiency
- Low stability: Increased proof rigor for security
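One way this adaptation could work is sketched below; the stability heuristic, the window and threshold values, and the "low" rigor level are assumptions for illustration, not the package's actual policy:

def select_proof_rigor(loss_history, window=5, threshold=0.05):
    # Hypothetical stability heuristic: small variation in recent losses => stable training.
    recent = loss_history[-window:]
    if len(recent) < window:
        return "high"                              # not enough history yet: stay conservative
    spread = max(recent) - min(recent)
    return "low" if spread < threshold else "high"

print(select_proof_rigor([0.92, 0.90, 0.91, 0.90, 0.89]))  # stable training  -> "low"
print(select_proof_rigor([1.4, 0.9, 1.2, 0.6, 1.1]))       # unstable training -> "high"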
Create a config.yaml:
server:
  host: "localhost"
  port: 8080
  num_rounds: 10
strategy:
  min_fit_clients: 2
  fraction_fit: 1.0
  momentum: 0.9
zkp:
  enable_zkp: true
  proof_rigor: "high"
  quantize_weights: true
  quantization_bits: 8

git clone https://github.com/krishantt/secure-fl
cd secure-fl
# Complete development setup
make dev
# Or manually with uv
uv sync --all-extras
make setup-zkp

# Run tests
make test
make test-quick # Fast tests with early exit
make test-cov # With coverage report
# Code quality
make lint # Check with ruff
make format # Format code
make type-check # Run mypy
make check # All quality checks
# Development workflow
make demo # Run demonstration
make clean # Clean artifacts

Run benchmarks and experiments:
# Basic demo
make demo
# or: uv run python experiments/demo.py
# Performance benchmark
uv run python experiments/benchmark.py
# Custom training
uv run python experiments/train.py --config experiments/config.yaml
# Check environment
make env-info

secure-fl/
├── secure_fl/           # Main package
│   ├── client.py        # FL client with zk-STARK proofs
│   ├── server.py        # FL server with zk-SNARK proofs
│   ├── aggregation.py   # FedJSCM algorithm
│   ├── proof_manager.py # ZKP generation/verification
│   ├── quantization.py  # Parameter compression
│   └── utils.py         # Utilities
├── proofs/              # ZKP circuits
│   ├── client_circuits/ # zk-STARK (Cairo)
│   └── server/          # zk-SNARK (Circom)
├── experiments/         # Research experiments
├── tests/               # Test suite
└── docs/                # Documentation
- Fork the repository
- Set up development environment: make dev
- Create a feature branch
- Make your changes with proper type hints
- Add tests and ensure coverage
- Run quality checks: make check
- Test your changes: make test
- Submit a pull request
- Use type hints throughout
- Follow the established error handling patterns
- Add proper logging with context
- Write tests for new functionality
- Update documentation as needed
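For instance, a small helper following these guidelines might look like this (a hypothetical example, not existing project code):

import logging
from collections.abc import Sequence

logger = logging.getLogger(__name__)

def mean_client_loss(losses: Sequence[float]) -> float:
    """Average per-client losses; illustrates type hints, explicit errors, and contextual logging."""
    if not losses:
        raise ValueError("mean_client_loss() received an empty loss list")
    result = sum(losses) / len(losses)
    logger.debug("Averaged %d client losses -> %.4f", len(losses), result)
    return result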
MIT License - see LICENSE for details.
@misc{timilsina2024secure,
  title={Secure FL: Dual-Verifiable Framework for Federated Learning using Zero-Knowledge Proofs},
  author={Timilsina, Krishant and Paudel, Bindu},
  year={2024},
  url={https://github.com/krishantt/secure-fl}
}

- Flower framework for federated learning infrastructure
- Circom and Cairo for zero-knowledge proof systems
- The federated learning and cryptography research communities