# Re-Representation in Sentential Relation Extraction with Sequence Routing Algorithm
| Model | TACRED | TACRED-Rev | Re-TACRED | CoNLL04 | Wikidata |
|---|---|---|---|---|---|
| **Baseline Models** | | | | | |
| Entity Marker | 74.6 | 83.2 | 91.1 | – | – |
| Curriculum Learning | 75.2 | – | 91.4 | – | – |
| – | – | – | 90.4 | 76.5 | – |
| KGpool | – | – | – | – | 88.6 |
| – | 86.6 | 88.3 | 73.3 | – | – |
| **Our Models** | | | | | |
| Ours BERT H₃ | 84.8 (47.8) | 85.3 (49.7) | 89.4 (74.0) | 99.7 (99.8) | 84.5 (32.0) |
| Ours BERT H₁,H₂,H₃,Decoder | 87.4 (48.3) | 88.7 (50.9) | 88.7 (68.5) | 100.0 (100.0) | – |
| Ours RoBERTa H₃ | 87.1 (61.1) | 88.8 (64.2) | 92.2 (80.1) | 100.0 (100.0) | 85.6 (32.9) |
If you use our work, kindly consider citing the paper:
```bibtex
@inproceedings{bahrami-yahyapour-2025-representation,
    title = "Re-Representation in Sentential Relation Extraction with Sequence Routing Algorithm",
    author = "Bahrami, Ramazan and
      Yahyapour, Ramin",
    editor = "Abbas, Mourad and
      Yousef, Tariq and
      Galke, Lukas",
    booktitle = "Proceedings of the 8th International Conference on Natural Language and Speech Processing (ICNLSP-2025)",
    month = aug,
    year = "2025",
    address = "Southern Denmark University, Odense, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.icnlsp-1.31/",
    pages = "315--327"
}
```
Download the datasets and place them in their respective folders under the `unprocessed_data` folder. The datasets can be downloaded from the links below:
- Wikidata
- conll04
- tacred. TACRED extensions: obtain retacred and tacredrev/tacrev by following the instructions in the provided links.
- retacred and the instructions.
- tacrev and the instructions.
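As a quick sanity check, the layout after downloading can be verified with a short script. The folder names below are assumptions inferred from the `--data` values used later in this README; check the config file for the exact paths the code expects:

```python
from pathlib import Path

# Hypothetical dataset folder names, inferred from the --data flags
# (conll, wikidata, tacred, retacred) used in the commands below.
DATASETS = ["conll", "wikidata", "tacred", "retacred"]

def expected_dataset_dirs(root="unprocessed_data"):
    """Return the directories where the raw datasets are expected to live."""
    return [Path(root) / name for name in DATASETS]

def missing_datasets(root="unprocessed_data"):
    """List dataset folders that have not been created/downloaded yet."""
    return [str(p) for p in expected_dataset_dirs(root) if not p.is_dir()]
```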
```shell
pip install git+https://github.com/glassroom/heinsen_routing
pip install transformers
pip install torch torchvision
pip install tqdm
```
The parameters are in the config file. Download the datasets, embeddings, etc., and place them in the expected locations.
Preprocess the data:

```shell
python main.py --task preprocess --data dataname --tokenizer_name modelname
```

where `dataname` is one of `conll`, `wikidata`, `tacred`, `retacred`, and `tokenizer_name` is one of `bert-large-uncased`, `roberta-large`.

Example, to preprocess conll with roberta-large:

```shell
python main.py --task preprocess --data conll --tokenizer_name roberta-large
```
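For intuition, preprocessing for sentence-level relation extraction typically wraps the two entity mentions in marker tokens before tokenization (cf. the Entity Marker baseline in the results table). The sketch below illustrates that idea; the marker strings and function name are assumptions, not the repository's actual code:

```python
def insert_entity_markers(tokens, head_span, tail_span,
                          head_markers=("[E1]", "[/E1]"),
                          tail_markers=("[E2]", "[/E2]")):
    """Wrap the head/tail entity spans (start, end), end-exclusive, in marker tokens."""
    # Insert at the rightmost span first so earlier indices stay valid.
    spans = sorted([(head_span, head_markers), (tail_span, tail_markers)],
                   key=lambda s: s[0][0], reverse=True)
    out = list(tokens)
    for (start, end), (open_m, close_m) in spans:
        out.insert(end, close_m)
        out.insert(start, open_m)
    return out
```

For example, `insert_entity_markers("Bill Gates founded Microsoft".split(), (0, 2), (3, 4))` returns `['[E1]', 'Bill', 'Gates', '[/E1]', 'founded', '[E2]', 'Microsoft', '[/E2]']`.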
To train for an experiment (`experimentNo` is one of `one`, `two`, `three`, `four`, `five`, corresponding to the experiments in the paper):

```shell
python main.py --task train --data wikidata --experiment experimentNo --model_to_train wordanalogy_re_model --tokenizer_name roberta-large
```
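The flags above suggest a command-line interface roughly like the following. This is a hypothetical reconstruction from the commands shown in this README, not the repository's actual `main.py`:

```python
import argparse

def build_parser():
    # All choices below are inferred from the commands and option
    # lists in this README; the real main.py may accept more values.
    p = argparse.ArgumentParser(description="Preprocess, train, or eval the RE model")
    p.add_argument("--task", choices=["preprocess", "train", "eval"], required=True)
    p.add_argument("--data", choices=["conll", "wikidata", "tacred", "retacred"],
                   required=True)
    p.add_argument("--experiment", choices=["one", "two", "three", "four", "five"])
    p.add_argument("--model_to_train", default="wordanalogy_re_model")
    p.add_argument("--tokenizer_name",
                   choices=["bert-large-uncased", "roberta-large"], required=True)
    return p
```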
To evaluate:

```shell
python main.py --task eval --data wikidata --experiment experimentNo --model_to_train wordanalogy_re_model --tokenizer_name roberta-large
```
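For reference, sentence-level relation extraction is conventionally scored with micro-averaged F1 over all relation classes except the negative class. A minimal sketch of that metric, assuming a TACRED-style `no_relation` label (the label name and scoring details are assumptions about this repository's eval):

```python
def micro_f1(gold, pred, negative_label="no_relation"):
    """Micro-averaged F1 over positive relation classes only."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p and p != negative_label)
    pred_pos = sum(1 for p in pred if p != negative_label)   # predicted positives
    gold_pos = sum(1 for g in gold if g != negative_label)   # actual positives
    if tp == 0:
        return 0.0
    precision = tp / pred_pos
    recall = tp / gold_pos
    return 2 * precision * recall / (precision + recall)
```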