PyGIP is a Python library designed for experimenting with graph-based model extraction attacks and defenses. It provides a modular framework to implement and test attack and defense strategies on graph datasets.
To get started with PyGIP, set up your environment by installing the required dependencies:
pip install -r reqs.txt
Ensure you have Python installed (version 3.8 or higher recommended) along with the necessary libraries listed in reqs.txt.
Specifically, use the following command to install dgl 2.2.1, and make sure your environment has pytorch==2.3.0. For a CPU-only build:
pip install dgl==2.2.1 -f https://data.dgl.ai/wheels/torch-2.3/repo.html
For a CUDA (cu118) build:
pip install dgl==2.2.1 -f https://data.dgl.ai/wheels/torch-2.3/cu118/repo.html
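After installing, a quick check that the pinned versions are in place can save debugging time. This is a minimal verification sketch, not part of PyGIP itself:
import torch
import dgl
# These should match the versions pinned by the install commands above.
print("torch:", torch.__version__)  # expect 2.3.0
print("dgl:", dgl.__version__)      # expect 2.2.1
print("cuda available:", torch.cuda.is_available())
Here’s a simple example to launch a Model Extraction Attack using PyGIP: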
from datasets import Cora
from models.attack import ModelExtractionAttack0
# Load the Cora dataset
dataset = Cora()
# Initialize the attack with a sampling ratio of 0.25
mea = ModelExtractionAttack0(dataset, 0.25)
# Execute the attack
mea.attack()
This code loads the Cora dataset, initializes a basic model extraction attack (ModelExtractionAttack0), and runs the attack with the specified sampling ratio.
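Because the attack only needs a dataset and a sampling ratio, it is straightforward to sweep several ratios in one script. The sketch below reuses only the API shown above; the ratio values are illustrative, not recommendations:
from datasets import Cora
from models.attack import ModelExtractionAttack0

dataset = Cora()
# Run the attack at several sampling ratios; each iteration builds a fresh attack object.
for ratio in (0.1, 0.25, 0.5):
    mea = ModelExtractionAttack0(dataset, ratio)
    mea.attack()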
And here is a simple example of running a defense method against a Model Extraction Attack:
from datasets import Cora
from models.defense import RandomWM
# Load the Cora dataset
dataset = Cora()
# Initialize the defense with a sampling ratio of 0.25
mead = RandomWM(dataset, 0.25)
# Execute the defense
mead.defend()
This runs random watermarking on the graph to defend against the model extraction attack (MEA).
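To compare the two sides of an experiment, you can script an undefended attack and the watermark defense back to back. This is a minimal sketch assuming each call reports its own results; fresh dataset objects keep the two runs independent:
from datasets import Cora
from models.attack import ModelExtractionAttack0
from models.defense import RandomWM

# Undefended attack run.
attack_dataset = Cora()
ModelExtractionAttack0(attack_dataset, 0.25).attack()

# Watermark defense run on a fresh copy of the dataset, for comparison.
defense_dataset = Cora()
RandomWM(defense_dataset, 0.25).defend()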
If you want to use CUDA, please set the environment variable:
export PYGIP_DEVICE=cuda:0
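Equivalently, you can set the variable from inside Python. This is a small convenience sketch; the variable name comes from the command above, and setting it before any PyGIP imports is the safe ordering:
import os

# Configure the device before importing PyGIP modules so the setting
# is visible when the library reads it.
os.environ["PYGIP_DEVICE"] = "cuda:0"

from datasets import Cora  # import after the device is configured
Refer to the Implementation Guideline.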
Refer to the Contributors Guideline.
MIT License
For questions or contributions, please contact blshen@fsu.edu.