This repository contains the official implementation for the article GraphXAIN: Narratives to Explain Graph Neural Networks (Cedro & Martens, 2024). Our method integrates Graph Neural Networks (GNNs), graph explainers, and Large Language Models (LLMs) to generate GraphXAINs, explainable AI (XAI) narratives that enhance the interpretability of GNN predictions.
Graph Neural Networks (GNNs) are a powerful technique for machine learning on graph-structured data, yet they pose challenges in interpretability. Existing GNN explanation methods usually yield technical outputs, such as subgraphs and feature importance scores, that are difficult for non-data scientists to understand, thereby undermining the purpose of explanations. Motivated by recent Explainable AI (XAI) research, we propose GraphXAIN, a method that generates natural language narratives explaining GNN predictions. GraphXAIN is a model- and explainer-agnostic method that uses Large Language Models (LLMs) to translate explanatory subgraphs and feature importance scores into coherent, story-like explanations of GNN decision-making processes. Evaluations on real-world datasets demonstrate GraphXAIN's ability to improve graph explanations. A survey of machine learning researchers and practitioners reveals that GraphXAIN enhances four explainability dimensions: understandability, satisfaction, convincingness, and suitability for communicating model predictions. When combined with another graph explainer method, GraphXAIN further improves trustworthiness, insightfulness, confidence, and usability. Notably, 95% of participants found GraphXAIN to be a valuable addition to the GNN explanation method. By incorporating natural language narratives, our approach serves both graph practitioners and non-expert users by providing clearer and more effective explanations.
To generate GraphXAINs for a given GNN model:
- Prepare Data: Ensure you have a ready-to-use graph dataset, or an adjacency matrix and a feature matrix for the input graph.
- Train GNN Model: Train your GNN model.
- Run the Graph Explainer: Follow the `notebooks/GraphXAIN_tutorial.ipynb` notebook to extract subgraphs and feature importance values.
- Generate GraphXAINs: Follow the `notebooks/GraphXAIN_tutorial.ipynb` notebook to generate GraphXAINs based on the extracted data (a minimal end-to-end sketch follows this list).
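The snippet below is a minimal, self-contained sketch of these steps using PyTorch Geometric's `Explainer` API. The Cora dataset and the small two-layer GCN are placeholders standing in for your own data and trained model, and the final LLM step is only indicated in a comment; the actual pipeline is in `notebooks/GraphXAIN_tutorial.ipynb`.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.explain import Explainer, GNNExplainer
from torch_geometric.nn import GCNConv

dataset = Planetoid(root='datasets/Planetoid', name='Cora')   # 1. prepare data (placeholder dataset)
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return F.log_softmax(self.conv2(x, edge_index), dim=-1)

model = GCN()                                                  # 2. train the GNN
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

explainer = Explainer(                                         # 3. run the graph explainer
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(mode='multiclass_classification',
                      task_level='node',
                      return_type='log_probs'),
)
explanation = explainer(data.x, data.edge_index, index=10)     # explain one node's prediction
explanation.visualize_feature_importance(top_k=10)

# 4. The extracted subgraph and feature importance scores are then passed to an
#    LLM prompt to produce the GraphXAIN narrative (see notebooks/GraphXAIN_tutorial.ipynb).
```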
To successfully run the code in this repository, you must modify certain functions in PyTorch Geometric. We have provided a file, utils/pyg_modifications.py, which contains all the updated functions needed.
- PyTorch versions: Make sure you have installed these specific versions:
  - `torch==2.2.1`
  - `torch-geometric==2.6.1`
- Open `utils/pyg_modifications.py`: This file is in the root of this repo (or wherever you placed it). Inside, you'll find code blocks for each function that needs patching:
  - `Explanation.visualize_graph`
  - `HeteroExplanation.visualize_feature_importance`
  - `_visualize_score`
  - `_visualize_graph_via_graphviz`
- Copy each code block into PyTorch Geometric (the snippet after these steps shows how to locate the files to patch):
  - For example, copy the `visualize_graph` block into the `Explanation` class within `torch_geometric/explain/explanation.py`.
  - Similarly, copy the `visualize_feature_importance` block into the `HeteroExplanation` class in the same file.
  - Copy the `_visualize_score` function and replace the original function.
  - Lastly, copy the `_visualize_graph_via_graphviz` block into `torch_geometric/visualization/graph.py`, replacing the existing function if present.
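If you are unsure where your local PyTorch Geometric installation lives, the short helper below (not part of the repo) prints the files to patch. It assumes the torch-geometric 2.6.1 layout, where the `Explanation`/`HeteroExplanation` classes and `_visualize_score` sit in `explain/explanation.py` and `_visualize_graph_via_graphviz` sits in `visualization/graph.py`.

```python
# Print the paths of the PyTorch Geometric modules that need patching.
import os
import torch_geometric

pyg_root = os.path.dirname(torch_geometric.__file__)
print('visualize_graph, visualize_feature_importance, _visualize_score:')
print(' ', os.path.join(pyg_root, 'explain', 'explanation.py'))
print('_visualize_graph_via_graphviz:')
print(' ', os.path.join(pyg_root, 'visualization', 'graph.py'))
```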
Note: Please ensure you run notebooks from the root directory of this repository. This guarantees that the file paths and environment references will resolve correctly.
- `datasets/`: Contains sample datasets used in the paper.
- `notebooks/`: Jupyter notebooks to generate GraphXAINs.
- `explanations/`: Contains outputs from the graph explainer.
- `utils/`: Contains `model.py` with the GNN model, `utils.py` with utility functions, and `pyg_modifications.py` with the necessary changes to the PyTorch Geometric package.
- `images/`: Contains images used in the publication.
- `survey/`: Contains results of the survey conducted for the human evaluation of GraphXAINs.
If you find this work useful, please cite our paper:
@article{cedro2024graphxain,
title={GraphXAIN: Narratives to Explain Graph Neural Networks},
author={Cedro, Mateusz and Martens, David},
journal={arXiv preprint arXiv:2411.02540},
year={2024}
}

This project is licensed under the MIT License.
For questions or collaborations, feel free to contact:
- Mateusz Cedro: mateusz.cedro@uantwerpen.be
- Affiliation: University of Antwerp, Belgium
We appreciate any feedback or contributions to the project!

