🎯 If you find our repository useful, please cite the corresponding paper (coming soon) and the repository:
TFLlib is a comprehensive library for trustworthy federated learning research, built on PFLlib. It provides a unified framework for evaluating federated learning algorithms under various trustworthiness threats, including backdoor attacks, Byzantine attacks, membership inference attacks (MIA), label inference attacks (LIA), and gradient inversion attacks (GIA).
- Classic FL Algorithms: FedAvg, FedProx, MOON, SCAFFOLD, FedDyn, FedNTD, FedGen
- Extensible Architecture: Easy to implement and integrate new FL algorithms
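As a sketch of the core step these algorithms share, FedAvg aggregates client models by a sample-size-weighted average. The function below is an illustrative pure-Python sketch (parameters represented as plain dicts of floats), not TFLlib's actual API:

```python
def fedavg_aggregate(client_params, client_sizes):
    """FedAvg: average client parameter dicts, weighted by local dataset size.

    client_params: list of dicts mapping parameter name -> value
    client_sizes:  list of local sample counts, one per client
    """
    total = sum(client_sizes)
    return {
        k: sum(p[k] * (n / total) for p, n in zip(client_params, client_sizes))
        for k in client_params[0]
    }

# Example: a client with 3x the data pulls the average 3x harder.
agg = fedavg_aggregate([{"w": 1.0}, {"w": 3.0}], [1, 3])  # -> {"w": 2.5}
```

Algorithms such as FedProx or SCAFFOLD modify the local objective or add control variates, but reuse this same server-side averaging step.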
- Computer Vision: CIFAR-10, CIFAR-100, TinyImageNet, FEMNIST
- Natural Language Processing: IMDB, AGNews, Sent140
- Tabular Data: Adult, Heart, Credit Card, Texas100, Purchase100
- Time Series: UCI-HAR
- Various Data Distribution Settings: IID, Non-IID (Dirichlet, Pathological, etc.)
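For instance, the widely used Dirichlet split draws per-class client proportions from Dirichlet(α); smaller α yields more skewed (more non-IID) label distributions. A minimal pure-Python sketch of this partitioning idea (function name and signature are illustrative, not TFLlib's API):

```python
import random

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with per-class Dirichlet proportions.

    Smaller alpha -> more skewed label distributions per client.
    """
    rng = random.Random(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        # Dirichlet(alpha) proportions via normalized gamma draws.
        weights = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(weights)
        start = 0
        for k in range(num_clients):
            end = len(idx) if k == num_clients - 1 else start + int(weights[k] / total * len(idx))
            client_indices[k].extend(idx[start:end])
            start = end
    return client_indices
```

With α = 100 the split is close to IID; with α = 0.1 many clients see only a few classes.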
- CNN Models: LeNet, SimpleCNN, ResNet series, VGG, MobileNet, ShuffleNet
- NLP Models: LSTM, BERT variants, ALBERT, ELECTRA, MobileBERT, MiniLM, TinyBERT
- Other Models: Logistic Regression, HAR-CNN, DeepSpeech
- Backdoor Attacks: DBA, A3FL, CerP, EdgeCase, Neurotoxin, Replace
- Byzantine Attacks: LIE, Fang, IPM, Label Flip, Median Tailored, Min-Max, Noise, Sign Flip, SignGuard, Update Flip
- Membership Inference Attacks: Nasr, Shokri, Zari, ML-Leaks
- Label Inference Attacks: Various LIA methods
- Gradient Inversion Attacks: DLG, Invert Gradients, See Through Gradients, LOKI, RobFed
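To illustrate the flavor of these threats, two of the simplest attacks listed above can be sketched in a few lines: a Byzantine sign flip that negates the client's update to push the aggregate away from the honest descent direction, and a poisoning label flip that remaps training labels. These are illustrative sketches, not TFLlib's implementations:

```python
def sign_flip_attack(update, scale=1.0):
    """Byzantine sign flip: submit the negated (optionally scaled) update."""
    return {k: -scale * v for k, v in update.items()}

def label_flip(labels, num_classes):
    """Poisoning label flip: remap each label y to num_classes - 1 - y."""
    return [num_classes - 1 - y for y in labels]
```

The other attacks in the list (e.g., DBA, DLG, Nasr) follow the same client-side threat model but require model- or gradient-level machinery beyond a short sketch.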
Coming soon...
- System Heterogeneity: Simulate varying computation capabilities of devices
- Communication Heterogeneity: Model unstable network conditions
- Device Availability: Handle dynamic client availability
- Efficiently utilize multiple GPUs for large-scale federated learning simulations
- Accelerate both training and evaluation processes
```
TFLlib/
├── flcore/
│   ├── clients/          # Client-side implementations
│   ├── fedatasets/       # Federated datasets
│   │   ├── other/        # Various dataset implementations
│   │   └── utils/        # Dataset utilities
│   ├── models/           # Model architectures
│   ├── optimizers/       # Federated optimizers
│   ├── security/         # Security components
│   │   ├── attack/       # Various attack implementations
│   │   │   ├── poison/   # Poisoning attacks
│   │   │   └── privacy/  # Privacy attacks
│   │   └── defense/      # Defense mechanisms
│   ├── servers/          # Server-side implementations
│   ├── simulation/       # Real-world simulation modules
│   └── utils/            # Utility functions
├── main.py               # Main entry point
├── run_exp_*.py          # Experiment scripts
└── config.py             # Configuration parsing
```
- ✔️ Add the parameter configurations for each experiment script
- ✔️ Provide .toml configuration files for easy experiment reproduction
- ✔️ Polish the documentation and add more tutorials
- ✔️ Provide download scripts for datasets and pretrained models
- ✔️ Add more defense mechanisms
```bash
# Clone the repository
git clone https://github.com/xaddwell/TFLlib.git
cd TFLlib

# Install dependencies
pip install -r requirements.txt
```

Run federated learning experiments with various configurations:
```bash
# Basic FedAvg on CIFAR-10
python main.py --algorithm FedAvg --data_name CIFAR10 --model_name resnet18

# Run with a non-IID data setting
python main.py --algorithm FedAvg --data_name CIFAR10 --model_name resnet18 --split_type diri --cncntrtn 0.5

# Run with system heterogeneity simulation
python main.py --algorithm FedAvg --data_name CIFAR10 --model_name resnet18 --dev_hetero 0.5 --comm_hetero 0.5
```

We provide several experiment scripts for reproducing results:
```bash
# Backdoor attack experiments
python run_exp_backdoor.py

# Byzantine attack experiments
python run_exp_byzantine.py

# Privacy attack experiments
python run_exp_inversion.py
python run_exp_lia.py
```

| Parameter | Description | Default |
|---|---|---|
| `--algorithm` | FL algorithm to use | FedAvg |
| `--data_name` | Dataset to use | CIFAR10 |
| `--model_name` | Model architecture | resnet18 |
| `--num_clients` | Total number of clients | 100 |
| `--join_ratio` | Fraction of clients participating in each round | 0.1 |
| `--local_epochs` | Number of local training epochs | 2 |
| `--global_rounds` | Number of global communication rounds | 500 |
| `--split_type` | Data distribution type | iid |
| `--dev_hetero` | Device heterogeneity level (0-1) | 0.5 |
| `--comm_hetero` | Communication heterogeneity level (0-1) | 0.5 |
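A command-line parser matching this table might look as follows. This is an illustrative argparse sketch built only from the flags and defaults listed above, not the actual contents of config.py:

```python
import argparse

def build_parser():
    """Argument parser mirroring the parameter table (illustrative defaults)."""
    p = argparse.ArgumentParser(description="TFLlib experiment options")
    p.add_argument("--algorithm", default="FedAvg", help="FL algorithm to use")
    p.add_argument("--data_name", default="CIFAR10", help="Dataset to use")
    p.add_argument("--model_name", default="resnet18", help="Model architecture")
    p.add_argument("--num_clients", type=int, default=100,
                   help="Total number of clients")
    p.add_argument("--join_ratio", type=float, default=0.1,
                   help="Fraction of clients participating in each round")
    p.add_argument("--local_epochs", type=int, default=2,
                   help="Number of local training epochs")
    p.add_argument("--global_rounds", type=int, default=500,
                   help="Number of global communication rounds")
    p.add_argument("--split_type", default="iid", help="Data distribution type")
    p.add_argument("--dev_hetero", type=float, default=0.5,
                   help="Device heterogeneity level (0-1)")
    p.add_argument("--comm_hetero", type=float, default=0.5,
                   help="Communication heterogeneity level (0-1)")
    return p

args = build_parser().parse_args([])  # defaults only, as in the table
```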
Coming soon...
TFLlib provides realistic simulation capabilities:
- Device Heterogeneity: Clients have different computational capabilities
- Communication Heterogeneity: Network conditions vary among clients
- Client Availability: Dynamic client participation patterns
These features enable researchers to evaluate FL algorithms under practical deployment conditions.
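A toy sketch of how such a simulation step might work: each round, sample a fraction of the clients and assign each a relative compute speed whose spread is controlled by a heterogeneity level. The names and the uniform speed model below are illustrative assumptions, not TFLlib's API:

```python
import random

def sample_round_clients(num_clients, join_ratio, dev_hetero, seed=None):
    """Pick one round's participants and assign each a relative compute speed.

    dev_hetero in [0, 1] widens the spread of client speeds:
    0 -> all clients equally fast; 1 -> speeds spread over (0, 2).
    """
    rng = random.Random(seed)
    k = max(1, int(num_clients * join_ratio))
    selected = rng.sample(range(num_clients), k)
    speeds = {c: 1.0 + dev_hetero * rng.uniform(-1.0, 1.0) for c in selected}
    return selected, speeds
```

Communication heterogeneity and availability can be layered on the same way, e.g. by sampling per-client bandwidths or dropping clients whose speed falls below a round deadline.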
Coming soon...
If you find TFLlib useful in your research, please cite:
```bibtex
@misc{chen2025tfllib,
  title={TFLlib: Trustworthy Federated Learning Library and Benchmark},
  author={Jiahao Chen and Zhiming Zhao and Jianqing Zhang},
  year={2025},
  url={https://github.com/xaddwell/TFLlib}
}
```

This project is licensed under the Apache License; see the LICENSE file for details.
We thank all the researchers who have contributed to the development of TFLlib. In particular, we thank Jianqing Zhang for providing PFLlib, the benchmark on which TFLlib is built.

