A Federated Learning simulation framework designed to facilitate research and experimentation with Federated Learning algorithms and techniques, with a key focus on modularity and ML library agnosticism.
Federated Learning is a machine learning paradigm that allows training models across multiple decentralized devices or servers while keeping the training data local. This repository contains a Federated Learning simulation framework that provides tools and utilities for conducting experiments, evaluating algorithms, and benchmarking performance. The primary goals of FLsim are to:
- Support diverse FL framework requirements and allow customization at various levels and stages of the FL workflow.
- Enable learning over different data distributions across clients.
- Achieve complete ML library agnosticism, serving a diverse community of users who each prefer different ML libraries.
- Support diverse network topologies, ranging from client-server to fully decentralized.
- Provide controlled reproducibility of experimental outcomes, so that the effect of tweaking and tuning hyperparameters and architectures can be gauged easily.
- Scale to a large number of nodes, since real-world FL use cases range from a few siloed data sites to thousands of edge clients.
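To make the workflow concrete, the sketch below shows one possible implementation of a basic FL round in the FedAvg style: each client takes a gradient step on its private data, and the server averages the resulting models weighted by local dataset size. This is a minimal illustration in plain NumPy, not FLsim's actual API; all function names here are hypothetical.

```python
# Minimal FedAvg-style sketch (illustrative only, not FLsim's API):
# clients train locally on private data, the server averages by sample count.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear least squares on a client's local data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg(client_models, sizes):
    """Server aggregation: average models weighted by local dataset size."""
    total = sum(sizes)
    return sum(n / total * w for w, n in zip(client_models, sizes))

# Three clients with private data drawn from the same true model w*.
w_true = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)  # global model held by the server
for _round in range(50):
    local_models = [local_step(w, X, y) for X, y in clients]
    w = fedavg(local_models, [len(y) for _, y in clients])

print(np.round(w, 2))  # converges toward w_true; raw data never leaves a client
```

Note that only model parameters cross the client-server boundary; the per-client `(X, y)` arrays stay local, which is the defining property of the paradigm.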
- Modular Architecture: The framework is built with a modular design, allowing easy extension and customization of components.
- Flexible Configuration: Configure various parameters such as network topology, dataset distribution, and learning algorithms.
- Visualization Tools: Monitor training progress, analyze performance metrics, and visualize results.
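One of the configurable parameters above, dataset distribution, is commonly controlled with a Dirichlet label split: a single concentration parameter tunes how non-IID the client shards are. The helper below is a generic sketch of that technique, not a function from FLsim.

```python
# Sketch of non-IID label partitioning via a Dirichlet distribution
# (hypothetical helper, not FLsim's own API).
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients; smaller alpha => more label skew."""
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    client_idx = [[] for _ in range(n_clients)]
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Divide class c's samples among clients by Dirichlet proportions.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return client_idx

labels = np.repeat(np.arange(10), 100)  # 10 classes, 100 samples each
parts = dirichlet_partition(labels, n_clients=5, alpha=0.5)
print([len(p) for p in parts])  # uneven shard sizes reflect the skew
```

With a large `alpha` the split approaches IID; with a small `alpha` each client sees only a few classes, which is the regime where FL algorithms are usually stress-tested.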
Refer to the Quick Start guide for more information.
Check out the Examples guide for more information.
This project is licensed under the GNU GPLv3; see the LICENSE file for details.
