Code for the EMNLP 2025 main-conference paper: Mixture of Length and Pruning Experts for Knowledge Graphs Reasoning


(Figure: MoKGR architecture)

MoKGR

MoKGR (Mixture of Length and Pruning Experts for Knowledge-Graph Reasoning) is a relation-centric framework that personalizes path exploration to deliver state-of-the-art KG reasoning in both transductive and inductive settings.

Key ideas · adaptive path-length selection · complementary pruning experts · fast & memory-efficient message passing

✨ Highlights

  • Adaptive Length Experts – query-aware gating selects the most relevant hop distances and stops early with a Gumbel-Sigmoid binary gate.
  • Complementary Pruning Experts – score-, attention-, and semantic-based experts collaboratively retain the most informative entities.
  • Unified Pipeline – handles fully inductive, transductive and cross-domain KGs with a single codebase.
  • Scalable – tested on large KGs (e.g. YAGO3-10) without GPU out-of-memory errors.
  • Plug-and-Play – lightweight implementation; a single modern GPU is sufficient for all benchmarks.
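To illustrate how complementary pruning experts can be combined, here is a minimal, framework-free sketch. The function name, score layout, and gating scheme are hypothetical simplifications; in MoKGR the experts and their mixture weights are learned modules:

```python
def mixture_prune(expert_scores, gate_weights, keep_ratio=0.5):
    """Combine per-entity scores from several pruning experts, keep the top fraction.

    expert_scores: dict mapping expert name -> list of per-entity scores
    gate_weights:  dict mapping expert name -> mixture weight (should sum to ~1)
    """
    names = list(expert_scores)
    n = len(expert_scores[names[0]])
    # Weighted sum of the experts' scores for each candidate entity
    combined = [sum(gate_weights[e] * expert_scores[e][i] for e in names)
                for i in range(n)]
    k = max(1, int(keep_ratio * n))
    # Keep the indices of the k highest-scoring entities
    kept = sorted(range(n), key=lambda i: combined[i], reverse=True)[:k]
    return sorted(kept)
```

For example, with a score expert favoring entity 0 and an attention expert favoring entity 1, equal gate weights keep whichever entities rank highest under the blended score.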

🔧 Installation

cd MoKGR
pip install -r requirements.txt   

🚀 Quick Start

1. Transductive Reasoning

cd transductive
# Family (small-scale)
python train.py \
  --data_path data/family --gpu 0 \
  --max_hop 8 --min_hop 2 \
  --num_experts 4 --num_pruning_experts 2 \
  --active_PPR --sampling_percentage 0.85 \
  --active_gate --gate_threshold 0.25
# YAGO3-10 (large-scale)
python train.py \
  --data_path data/YAGO --gpu 0 \
  --max_hop 8 --min_hop 1 \
  --num_experts 6 --num_pruning_experts 2 \
  --active_PPR --sampling_percentage 0.475

πŸ“ Tip: Encounter OOM? Increase --sampling_percentage or disable --active_PPR to reduce subgraph size.

2. Inductive Reasoning

cd inductive
python train.py \
  --data_path ./data/WN18RR_v2 --gpu 0 \
  --max_hop 8 --min_hop 2 --num_experts 5 \
  --active_gate --gate_threshold 0.05
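The --active_gate flag above enables the Gumbel-Sigmoid binary gate used for early stopping in the length experts. A minimal, framework-free sketch of such a gate (in the real model this operates on learned logits inside autograd with a straight-through estimator; the function name and threshold semantics here are illustrative assumptions):

```python
import math
import random

def gumbel_sigmoid_gate(logit, tau=1.0, threshold=0.5, rng=random):
    """Stochastic binary gate: relaxed Bernoulli sample, then threshold.

    logit: pre-gate score for "continue expanding at this hop"
    tau:   temperature; lower values give closer-to-hard 0/1 decisions
    """
    # Two independent Gumbel(0, 1) samples: g = -log(-log(U)), U ~ Uniform(0, 1)
    g1 = -math.log(-math.log(rng.random()))
    g2 = -math.log(-math.log(rng.random()))
    # Relaxed (soft) gate value in (0, 1)
    soft = 1.0 / (1.0 + math.exp(-(logit + g1 - g2) / tau))
    # Hard 0/1 decision (straight-through in a differentiable setting)
    hard = 1.0 if soft > threshold else 0.0
    return hard, soft
```

The CLI's --gate_threshold plays a comparable thresholding role in the pipeline, though its exact semantics in the codebase are not shown here.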

📊 Reproducing Paper Results

Dataset     MRR    Hit@1  Hit@10
WN18RR      0.611  0.539  0.702
FB15k-237   0.443  0.368  0.607
YAGO3-10    0.657  0.577  0.758

Full benchmark tables & ablation studies can be found in our paper's Appendix B–D.

🛠 Project Structure

MoKGR/
├─ transductive/     # training & evaluation scripts (fixed entity set)
├─ inductive/        # inductive split loader + training scripts
├─ images/           # MoKGR logo
└─ requirements.txt

Citation

If you find our work useful, please cite our paper:

@inproceedings{du2025mokgr,
  title        = {Mixture of Length and Pruning Experts for Knowledge Graphs Reasoning},
  author       = {Du, Enjun and Liu, Siyi and Zhang, Yongqi},
  booktitle    = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year         = {2025},
  publisher    = {Association for Computational Linguistics},
  url          = {https://aclanthology.org/2025.emnlp-main.23}
}
