A General Framework to Evaluate Robustness of Aggregation Algorithms in Federated Learning

by Virat Shejwalkar and Amir Houmansadr, published at the ISOC Network and Distributed System Security Symposium (NDSS) 2021. This repository contains the code for the paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning".

Motivation

Result Highlights

Understanding the code and using the notebooks

The code is provided as self-explanatory Jupyter notebooks; each cell is described within its respective notebook. To run the code, clone or download the repo and run the notebooks in the usual manner. The evaluation covers the following dimensions:

  • Datasets: CIFAR10 (covering the iid and cross-silo FL cases) and FEMNIST (covering the non-iid and cross-device FL cases).
  • Code for five state-of-the-art aggregation algorithms with theoretical convergence guarantees: Krum, Multi-krum, Bulyan, Trimmed-mean, and Median (a minimal sketch of the last two follows this list).
  • Baseline model poisoning attacks: Fang and LIE.
  • Our state-of-the-art model poisoning attacks, Aggregation-tailored and Aggregation-agnostic attacks, for the above-mentioned aggregation algorithms. For any other aggregation algorithm, the code allows a simple plug-and-attack workflow (see the sketch after this list).
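
To make the aggregation side concrete, below is a minimal NumPy sketch of two of the evaluated rules, Trimmed-mean and coordinate-wise Median. The function names, the flat (n_clients, n_params) update representation, and the toy example are illustrative assumptions, not the repo's actual interfaces.

    # Minimal sketch of Trimmed-mean and coordinate-wise Median aggregation.
    # Names and shapes are illustrative, not the notebooks' API.
    import numpy as np

    def trimmed_mean(updates, beta):
        """Coordinate-wise trimmed mean: drop the beta largest and beta smallest
        values in each coordinate, then average the remaining ones.
        updates: array of shape (n_clients, n_params)."""
        sorted_updates = np.sort(updates, axis=0)
        return sorted_updates[beta:updates.shape[0] - beta].mean(axis=0)

    def coordinate_median(updates):
        """Coordinate-wise median of the client updates."""
        return np.median(updates, axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        benign = rng.normal(0.0, 1.0, size=(18, 5))      # 18 benign clients
        poisoned = np.full((2, 5), 50.0)                 # 2 malicious clients
        all_updates = np.concatenate([benign, poisoned])
        print("mean        :", all_updates.mean(axis=0))          # pulled toward 50
        print("trimmed mean:", trimmed_mean(all_updates, beta=2)) # poisoned values dropped
        print("median      :", coordinate_median(all_updates))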

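The plug-and-attack idea can be sketched in the same style: any aggregation rule that maps a stack of client updates to a single aggregate can be handed to a generic, aggregation-tailored attack that searches for the scaling factor causing the largest deviation. The perturbation direction, the grid search over gamma, and all names below are illustrative assumptions, not the exact optimization implemented in the notebooks.

    # Minimal sketch of an aggregation-tailored attack that takes an arbitrary
    # aggregation rule as a plug-in. Assumed names and objective, for illustration only.
    import numpy as np

    def tailored_attack(benign_updates, aggregate_fn, n_malicious,
                        gammas=np.linspace(0.0, 10.0, 101)):
        """Craft n_malicious identical poisoned updates of the form
        mean(benign) - gamma * std(benign), picking the gamma that pushes the
        given aggregation rule farthest from the benign aggregate."""
        mu = benign_updates.mean(axis=0)
        direction = benign_updates.std(axis=0)        # one common perturbation direction
        benign_agg = aggregate_fn(benign_updates)

        best_gamma, best_dev = 0.0, -np.inf
        for gamma in gammas:
            poisoned = np.tile(mu - gamma * direction, (n_malicious, 1))
            agg = aggregate_fn(np.concatenate([benign_updates, poisoned]))
            deviation = np.linalg.norm(agg - benign_agg)
            if deviation > best_dev:
                best_gamma, best_dev = gamma, deviation
        return best_gamma, np.tile(mu - best_gamma * direction, (n_malicious, 1))

    # Plugging in a new aggregation rule only requires passing its function, e.g.
    # the coordinate_median sketch above:
    # gamma, poisoned = tailored_attack(benign, aggregate_fn=coordinate_median, n_malicious=2)
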
Requirements
