
TFLlib: Trustworthy Federated Learning Library and Benchmark

Apache License 2.0

🎯 If you find our repository useful, please cite the corresponding paper (coming soon) and this repository:

TFLlib is a comprehensive library for trustworthy federated learning research, built on PFLlib. It provides a unified framework to evaluate federated learning algorithms under various trustworthiness threats, including backdoor attacks, Byzantine attacks, Membership Inference Attacks (MIA), Label Inference Attacks (LIA), and Gradient Inversion Attacks (GIA).

Framework

Key Features

Comprehensive FL Algorithms Support

  • Classic FL Algorithms: FedAvg, FedProx, MOON, SCAFFOLD, FedDyn, FedNTD, FedGen
  • Extensible Architecture: Easy to implement and integrate new FL algorithms
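
As a rough illustration of what the classic algorithms above have in common, the following sketch shows FedAvg-style weighted aggregation. This is illustrative only: the function name `fedavg_aggregate` is hypothetical, and TFLlib's actual server-side logic lives under flcore/servers/ and may differ.

```python
# Illustrative sketch of FedAvg aggregation: each parameter is averaged
# across clients, weighted by the size of each client's local dataset.
def fedavg_aggregate(client_weights, client_sizes):
    """client_weights: one parameter list per client; client_sizes: local sample counts."""
    total = sum(client_sizes)
    num_params = len(client_weights[0])
    aggregated = []
    for p in range(num_params):
        agg = sum(w[p] * (n / total) for w, n in zip(client_weights, client_sizes))
        aggregated.append(agg)
    return aggregated

# Example: two clients holding scalar "models", with 1 and 3 samples respectively
global_model = fedavg_aggregate([[2.0], [4.0]], client_sizes=[1, 3])
```

Variants such as FedProx or SCAFFOLD keep this aggregation step but change what the clients optimize or transmit.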

Diverse Dataset Support

  • Computer Vision: CIFAR-10, CIFAR-100, TinyImageNet, FEMNIST
  • Natural Language Processing: IMDB, AGNews, Sent140
  • Tabular Data: Adult, Heart, Credit Card, Texas100, Purchase100
  • Time Series: UCI-HAR
  • Various Data Distribution Settings: IID, Non-IID (Dirichlet, Pathological, etc.)
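
To make the Dirichlet non-IID setting above concrete, here is a minimal, self-contained sketch of label-skewed partitioning. The function `dirichlet_partition` is hypothetical (TFLlib's own split logic lives under flcore/fedatasets/); smaller `alpha` yields more skewed per-client label distributions.

```python
import random

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients with per-class Dirichlet proportions.
    Smaller alpha -> more skewed (more non-IID) label distributions."""
    rng = random.Random(seed)
    clients = [[] for _ in range(num_clients)]
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        # Dirichlet(alpha, ..., alpha) sampled as normalized Gamma draws
        gammas = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(gammas)
        # Turn cumulative proportions into split points within this class
        cum, splits = 0.0, []
        for g in gammas[:-1]:
            cum += g / total
            splits.append(int(cum * len(idx)))
        bounds = [0] + splits + [len(idx)]
        for k in range(num_clients):
            clients[k].extend(idx[bounds[k]:bounds[k + 1]])
    return clients

labels = [0] * 50 + [1] * 50
parts = dirichlet_partition(labels, num_clients=5, alpha=0.5)
```

Each sample is assigned to exactly one client; with `alpha=0.5` some clients end up holding mostly one class.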

Rich Model Zoo

  • CNN Models: LeNet, SimpleCNN, ResNet series, VGG, MobileNet, ShuffleNet
  • NLP Models: LSTM, BERT variants, ALBERT, ELECTRA, MobileBERT, MiniLM, TinyBERT
  • Other Models: Logistic Regression, HAR-CNN, DeepSpeech

Security Threats & Attacks (updating)

Poisoning Attacks

Privacy Attacks

  • Membership Inference Attacks: Nasr, Shokri, Zari, ML-Leaks
  • Label Inference Attacks: Various LIA methods
  • Gradient Inversion Attacks: DLG, Invert Gradients, See Through Gradients, LOKI, RobFed
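
The following toy example illustrates the leakage that gradient inversion attacks such as DLG exploit (it is a simplified single-sample case, not TFLlib's attack implementation): for a fully-connected layer with a bias, the weight gradient is the prediction error times the input, so dividing by the bias gradient recovers the private input exactly.

```python
# Private client data and a toy linear model with squared loss
x = [0.3, -1.2, 0.7, 2.0]           # private input
w = [0.5, 0.1, -0.4, 0.9]           # layer weights
b, y = 0.1, 1.0                     # bias and private label

pred = sum(wi * xi for wi, xi in zip(w, x)) + b
err = pred - y                      # dL/d(pred) for 0.5 * (pred - y)^2
grad_w = [err * xi for xi in x]     # gradients the client would share
grad_b = err                        # bias gradient equals the error itself

# The server (or any eavesdropper) reconstructs the input from gradients alone
x_reconstructed = [gw / grad_b for gw in grad_w]
```

Full attacks like DLG generalize this idea by optimizing dummy inputs and labels until their gradients match the shared ones.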

Defense Mechanisms

Coming soon...

Real-world Simulation

  • System Heterogeneity: Simulate varying computation capabilities of devices
  • Communication Heterogeneity: Model unstable network conditions
  • Device Availability: Handle dynamic client availability

Multi-GPU Support

  • Efficiently utilize multiple GPUs for large-scale federated learning simulations
  • Accelerate both training and evaluation processes

Architecture

TFLlib/
├── flcore/
│   ├── clients/              # Client-side implementations
│   ├── fedatasets/           # Federated datasets
│   │   ├── other/            # Various dataset implementations
│   │   └── utils/            # Dataset utilities
│   ├── models/               # Model architectures
│   ├── optimizers/           # Federated optimizers
│   ├── security/             # Security components
│   │   ├── attack/           # Various attack implementations
│   │   │   ├── poison/       # Poisoning attacks
│   │   │   └── privacy/      # Privacy attacks
│   │   └── defense/          # Defense mechanisms
│   ├── servers/              # Server-side implementations
│   ├── simulation/           # Real-world simulation modules
│   └── utils/                # Utility functions
├── main.py                   # Main entry point
├── run_exp_*.py              # Experiment scripts
└── config.py                 # Configuration parsing

ToDo List

  • ⭕️ Add the parameter configurations for each experiment script
  • ⭕️ Provide .toml configuration files for easy experiment reproduction
  • ⭕️ Polish the documentation and add more tutorials
  • ⭕️ Provide datasets and pretrained models download scripts
  • ⭕️ Add more defense mechanisms

Getting Started [TODO]

Installation

# Clone the repository
git clone https://github.com/xaddwell/TFLlib.git
cd TFLlib

# Install dependencies
pip install -r requirements.txt

Basic Usage

Run federated learning experiments with various configurations:

# Basic FedAvg on CIFAR-10
python main.py --algorithm FedAvg --data_name CIFAR10 --model_name resnet18

# Run with non-IID data setting
python main.py --algorithm FedAvg --data_name CIFAR10 --model_name resnet18 --split_type diri --cncntrtn 0.5

# Run with system heterogeneity simulation
python main.py --algorithm FedAvg --data_name CIFAR10 --model_name resnet18 --dev_hetero 0.5 --comm_hetero 0.5

Pre-configured Experiments

We provide several experiment scripts for reproducing results:

# Backdoor attack experiments
python run_exp_backdoor.py

# Byzantine attack experiments
python run_exp_byzantine.py

# Privacy attack experiments
python run_exp_inversion.py
python run_exp_lia.py

Key Parameters for FL Training

Parameter         Description                                       Default
--algorithm       FL algorithm to use                               FedAvg
--data_name       Dataset to use                                    CIFAR10
--model_name      Model architecture                                resnet18
--num_clients     Total number of clients                           100
--join_ratio      Fraction of clients participating in each round   0.1
--local_epochs    Number of local training epochs                   2
--global_rounds   Number of global communication rounds             500
--split_type      Data distribution type                            iid
--dev_hetero      Device heterogeneity level (0-1)                  0.5
--comm_hetero     Communication heterogeneity level (0-1)           0.5
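
As a hedged sketch of how these flags could be wired up, here is an argparse parser mirroring the documented names and defaults. TFLlib's real parsing lives in config.py and its exact option handling may differ.

```python
import argparse

# Hypothetical parser mirroring the documented TFLlib training flags
parser = argparse.ArgumentParser(description="TFLlib-style FL training flags")
parser.add_argument("--algorithm", default="FedAvg", help="FL algorithm to use")
parser.add_argument("--data_name", default="CIFAR10", help="Dataset to use")
parser.add_argument("--model_name", default="resnet18", help="Model architecture")
parser.add_argument("--num_clients", type=int, default=100)
parser.add_argument("--join_ratio", type=float, default=0.1)
parser.add_argument("--local_epochs", type=int, default=2)
parser.add_argument("--global_rounds", type=int, default=500)
parser.add_argument("--split_type", default="iid")
parser.add_argument("--dev_hetero", type=float, default=0.5)
parser.add_argument("--comm_hetero", type=float, default=0.5)

# Parse an example command line; unspecified flags keep their defaults
args = parser.parse_args(["--algorithm", "FedProx", "--num_clients", "50"])
```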

Key Parameters for Client Attack [TODO]

Coming soon...

Key Parameters for Server Attack [TODO]

Coming soon...

Real-world Environment Simulation

TFLlib provides realistic simulation capabilities:

  1. Device Heterogeneity: Clients have different computational capabilities
  2. Communication Heterogeneity: Network conditions vary among clients
  3. Client Availability: Dynamic client participation patterns

These features enable researchers to evaluate FL algorithms under practical deployment conditions.
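
One plausible way to model points 1 and 2 is to let each heterogeneity level widen the spread of per-client compute and communication costs, with the slowest selected client bounding the round. The sketch below is an assumption about the mechanism: the function `simulate_round_time` and the exact cost model are hypothetical, not TFLlib's flcore/simulation/ code.

```python
import random

def simulate_round_time(num_clients, dev_hetero, comm_hetero,
                        base_compute=1.0, base_comm=0.5, seed=0):
    """Return one round's duration: the slowest client bounds the round.
    dev_hetero/comm_hetero in [0, 1] widen the spread around the base costs."""
    rng = random.Random(seed)
    times = []
    for _ in range(num_clients):
        compute = base_compute * (1 + dev_hetero * rng.random())
        comm = base_comm * (1 + comm_hetero * rng.random())
        times.append(compute + comm)
    return max(times)
```

With both levels at 0 every client takes the same base time; raising them lengthens rounds because stragglers dominate.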

Multi-GPU Training

Coming soon...

Extending TFLlib with Your Own Attack & Defense Mechanisms [TODO]

Coming soon...

Citation

If you find TFLlib useful in your research, please cite:

@misc{chen2025tfllib,
  title={TFLlib: Trustworthy Federated Learning Library and Benchmark},
  author={Jiahao Chen and Zhiming Zhao and Jianqing Zhang},
  year={2025},
  url={https://github.com/xaddwell/TFLlib}
}

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Acknowledgments

We thank all the researchers who have contributed to the development of TFLlib. In particular, we thank Jianqing Zhang for providing PFLlib, the benchmark on which TFLlib is built.
