4 changes: 4 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -18,3 +18,7 @@ dist/

#virtual environments folder
.venv
temp_results/
robustness_results/
*.pth
*.pt
120 changes: 10 additions & 110 deletions README.md
@@ -1,118 +1,18 @@
# PyGIP
# PyGIP - GNN Ownership Verification Module

[![PyPI - Version](https://img.shields.io/pypi/v/PyGIP)](https://pypi.org/project/PyGIP)
[![Build Status](https://img.shields.io/github/actions/workflow/status/LabRAI/PyGIP/docs.yml)](https://github.com/LabRAI/PyGIP/actions)
[![License](https://img.shields.io/github/license/LabRAI/PyGIP.svg)](https://github.com/LabRAI/PyGIP/blob/main/LICENSE)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/pygip)](https://github.com/LabRAI/PyGIP)
[![Issues](https://img.shields.io/github/issues/LabRAI/PyGIP)](https://github.com/LabRAI/PyGIP)
[![Pull Requests](https://img.shields.io/github/issues-pr/LabRAI/PyGIP)](https://github.com/LabRAI/PyGIP)
[![Stars](https://img.shields.io/github/stars/LabRAI/PyGIP)](https://github.com/LabRAI/PyGIP)
[![GitHub forks](https://img.shields.io/github/forks/LabRAI/PyGIP)](https://github.com/LabRAI/PyGIP)
This repository contains the integration of **Graph Neural Network (GNN) ownership verification** experiments into the PyGIP framework. It provides modular and extensible implementations for attacks and defenses on GNNs, following the guidelines of the PyGIP framework.

PyGIP is a Python library designed for experimenting with graph-based model extraction attacks and defenses. It provides
a modular framework to implement and test attack and defense strategies on graph datasets.
---

## How to Cite
## πŸ“‹ Overview

If you find it useful, please consider citing the following work:
This module allows users to:

```bibtex
@article{li2025intellectual,
title={Intellectual Property in Graph-Based Machine Learning as a Service: Attacks and Defenses},
author={Li, Lincan and Shen, Bolin and Zhao, Chenxi and Sun, Yuxiang and Zhao, Kaixiang and Pan, Shirui and Dong, Yushun},
journal={arXiv preprint arXiv:2508.19641},
year={2025}
}
```
- Evaluate ownership verification on GNN models (GCN, GAT, GraphSAGE).
- Run experiments under **inductive** and **transductive** settings.
- Easily extend the framework with new datasets, attacks, or defenses.
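The core verification idea can be illustrated independently of any framework: a surrogate model extracted from a victim tends to agree with the victim on far more queries than an independently trained model does. Below is a minimal toy sketch of this output-agreement (fidelity) check; all names are hypothetical and none of this is PyGIP's actual API.

```python
# Toy illustration of output-agreement ownership verification.
# All names here are hypothetical -- this does NOT use PyGIP's real API.

def agreement(model_a, model_b, inputs):
    """Fraction of inputs on which two models predict the same label."""
    return sum(model_a(x) == model_b(x) for x in inputs) / len(inputs)

# A "victim" model, a "stolen" copy that mimics it closely,
# and an unrelated, independently built model.
def victim(x): return x % 3
def stolen(x): return x % 3 if x % 10 else (x + 1) % 3  # disagrees on 10% of inputs
def independent(x): return (x * 7 + 1) % 3

queries = list(range(100))
print(agreement(victim, stolen, queries))       # 0.9 -- high agreement, suspicious
print(agreement(victim, independent, queries))  # 0.0 -- likely independent
```

A real verification pipeline replaces these toy functions with trained GNNs and carefully chosen query graphs, but the decision rule is the same: unusually high agreement is evidence of extraction.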

> **Note:** Large model weights (`benign_model.pth`) and result folders (`temp_results/`, `robustness_results/`) are excluded to keep the repository clean.

## Installation
---

PyGIP supports both CPU and GPU environments. Make sure you have Python installed (version >= 3.8, <3.13).

### Base Installation

First, install the core package:

```bash
pip install PyGIP
```

This will install PyGIP with minimal dependencies.

### CPU Version

```bash
pip install "PyGIP[torch,dgl]" \
--index-url https://download.pytorch.org/whl/cpu \
--extra-index-url https://pypi.org/simple \
-f https://data.dgl.ai/wheels/repo.html
```

### GPU Version (CUDA 12.1)

```bash
pip install "PyGIP[torch,dgl]" \
--index-url https://download.pytorch.org/whl/cu121 \
--extra-index-url https://pypi.org/simple \
-f https://data.dgl.ai/wheels/torch-2.3/cu121/repo.html
```

## Quick Start

Here’s a simple example to launch a Model Extraction Attack using PyGIP:

```python
from datasets import Cora
from models.attack import ModelExtractionAttack0

# Load the Cora dataset
dataset = Cora()

# Initialize the attack with a sampling ratio of 0.25
mea = ModelExtractionAttack0(dataset, 0.25)

# Execute the attack
mea.attack()
```

This code loads the Cora dataset, initializes a basic model extraction attack (`ModelExtractionAttack0`), and runs the
attack with a specified sampling ratio.

And here is a simple example of running a defense against a model extraction attack:

```python
from datasets import Cora
from models.defense import RandomWM

# Load the Cora dataset
dataset = Cora()

# Initialize the defense with a sampling ratio of 0.25
med = RandomWM(dataset, 0.25)

# Execute the defense
med.defend()
```

This runs the random graph watermarking defense against the model extraction attack.

If you want to use CUDA, set the environment variable:

```shell
export PYGIP_DEVICE=cuda:0
```
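Internally, a library can pick up such a variable with a one-line lookup. The sketch below shows one plausible way `PYGIP_DEVICE` could be read; the fallback logic is an assumption for illustration, not PyGIP's actual implementation.

```python
# Sketch of reading PYGIP_DEVICE with a CPU fallback.
# The fallback behavior is an assumption, not PyGIP's documented semantics.
import os

def resolve_device(default="cpu"):
    """Return the device string from PYGIP_DEVICE, or a default."""
    return os.environ.get("PYGIP_DEVICE", default)

os.environ["PYGIP_DEVICE"] = "cuda:0"
print(resolve_device())  # cuda:0
```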

## Implementation & Contributors Guideline

Refer to [Implementation Guideline](.github/IMPLEMENTATION.md)

Refer to [Contributors Guideline](.github/CONTRIBUTING.md)

## License

[BSD 2-Clause License](LICENSE)

## Contact

For questions or contributions, please contact blshen@fsu.edu.
13 changes: 13 additions & 0 deletions config/global_cfg.yaml
@@ -0,0 +1,13 @@
target_model: gat
target_hidden_dims: [224, 128]
dataset: Cora
train_setting: ""
test_setting: 1
embedding_dim: 128
train_process: "train"
test_process: "test"
n_run: 3

train_save_root: ../temp_results/diff/model_states/
test_save_root: ../temp_results/diff/model_states/
res_path: ../temp_results/diff/res/
4 changes: 4 additions & 0 deletions config/test_setting1.yaml
@@ -0,0 +1,4 @@
model_arches: ["gat", "gcn", "sage"]
layer_dims: [96, 160, 224, 288, 352]
num_hidden_layers: [2]
num_model_per_arch: 10
4 changes: 4 additions & 0 deletions config/test_setting2.yaml
@@ -0,0 +1,4 @@
model_arches: ["gat", "gcn", "sage"]
layer_dims: [128, 192, 256, 320, 384]
num_hidden_layers: [1, 3]
num_model_per_arch: 10
4 changes: 4 additions & 0 deletions config/test_setting3.yaml
@@ -0,0 +1,4 @@
model_arches: ["gin", "sgc"]
layer_dims: [96, 160, 224, 288, 352]
num_hidden_layers: [2]
num_model_per_arch: 15
4 changes: 4 additions & 0 deletions config/test_setting4.yaml
@@ -0,0 +1,4 @@
model_arches: ["gin", "sgc"]
layer_dims: [128, 192, 256, 320, 384]
num_hidden_layers: [1, 3]
num_model_per_arch: 15
4 changes: 4 additions & 0 deletions config/train_setting.yaml
@@ -0,0 +1,4 @@
model_arches: ["gat", "gcn", "sage"]
layer_dims: [96, 160, 224, 288, 352]
num_hidden_layers: [2]
num_model_per_arch: 20 # per arch
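Each settings file above defines a grid of candidate models: architectures crossed with hidden-layer widths and depths, each combination trained multiple times. One plausible reading (an assumption on my part) is that `num_model_per_arch` repeats every (architecture, width, depth) combination, which makes the grid size easy to compute with the standard library. Values below are copied from `train_setting.yaml`.

```python
# Enumerate the model grid implied by train_setting.yaml, assuming
# num_model_per_arch repeats each (arch, width, depth) combination.
from itertools import product

cfg = {
    "model_arches": ["gat", "gcn", "sage"],
    "layer_dims": [96, 160, 224, 288, 352],
    "num_hidden_layers": [2],
    "num_model_per_arch": 20,
}

grid = list(product(cfg["model_arches"],
                    cfg["layer_dims"],
                    cfg["num_hidden_layers"]))
total = len(grid) * cfg["num_model_per_arch"]
print(len(grid), total)  # 15 distinct configurations, 300 trained models
```

Under that assumption the training pool holds 300 models; if `num_model_per_arch` instead counts models per architecture alone, the pool would be 60.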
Binary file added data/Cora/raw/ind.cora.allx
Binary file not shown.
Binary file added data/Cora/raw/ind.cora.ally
Binary file not shown.
Binary file added data/Cora/raw/ind.cora.graph
Binary file not shown.