- Copy `sskip.100.vectors` to `PMB2/gold`
- Copy `train_input.txtRaw` to `PMB2/gold`
- Copy `dev_input.txtRaw` to `PMB2/gold`
- Copy `test_input.txtRaw` to `PMB2/gold`
- `cd PMB2/gold`
- `python replaceCard.py`
- `python replaceCard.py -src dev_input.txtRaw -trg dev_input.txt`
- `python replaceCard.py -src test_input.txtRaw -trg test_input.txt`
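A minimal shell sketch of the data preparation above. The source locations of `sskip.100.vectors` and the `*_input.txtRaw` files are not given in this README, so the `/path/to/...` prefixes are placeholders:

```bash
# Placeholder source paths -- adjust to wherever the embeddings and raw PMB inputs live.
cp /path/to/sskip.100.vectors PMB2/gold/
cp /path/to/{train,dev,test}_input.txtRaw PMB2/gold/

cd PMB2/gold
# The first call relies on the script's defaults (presumably the training split);
# the dev and test splits are passed explicitly via -src/-trg.
python replaceCard.py
python replaceCard.py -src dev_input.txtRaw -trg dev_input.txt
python replaceCard.py -src test_input.txtRaw -trg test_input.txt
```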
- Generate the global condition tag file `tag.txt` from the training data
- `python globalRel.py`
- `cd seq2tree` (or `cd tree2tree`, `cd tree2treePos`, `cd seqtree2tree`)
- `python main.py`
  - `-s`: save the model (default=False)
  - `-t`: only test the saved model (default=False)
  - `-r`: reload the model when continuing training (default=False)
  - `-mp`: drop the Universal POS tags from the features (default=False)
  - `-md`: drop the Universal Dependency tags from the features (default=False)
  - `-mw`: drop the word embeddings from the features (default=False)
  - `-model`: the model name (default='output_model/1.model')
- Use Ctrl+C to stop the training; an example run follows below.
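A hypothetical training session. Whether the boolean flags take explicit `True`/`False` values or act as plain switches is an assumption here, so check `python main.py -h` for the exact syntax:

```bash
# Flag syntax is assumed, not confirmed by this README -- see `python main.py -h`.
# Train and save the model under output_model/:
python main.py -s True -model output_model/1.model

# After interrupting with Ctrl+C, reload the saved model and continue training:
python main.py -s True -r True -model output_model/1.model
```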
The development and test results after each epoch are written to `output_dev` and `output_tst` respectively. First choose the epoch that performs best on the development set, then report that epoch's results on the test set.
Evaluate on the development set.
- `cd seq2tree/output_dev` (or `cd tree2tree/output_dev`, `cd tree2treePos/output_dev`, `cd seqtree2tree/output_dev`)
- Convert the predicted Discourse Representation Structures into the line format expected by Counter, and produce rough scores by comparing the two files line by line. Choose the epoch (file) `i` with the highest F-score and use that `i` in the next step.
- `python convertAndRoughTest.py`
  - `-r1`: start of the evaluated epoch range (default=1)
  - `-r2`: end of the evaluated epoch range (default=2)
  - `-src`: the gold development file (default='dev.gold')
  - `-trg`: the transformation result of the gold development file (default='dev.test')
  - `-gold`: the gold development file after transformation (default='dev.test')
- `python ../../DRS_parsing/counter.py -f1 i.test -f2 dev.test -pr -prin -ms` (optionally redirect the output: `> i.results`), where `i` is the epoch chosen in the previous step.
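For instance, if the model ran for 30 epochs and epoch 12 turns out to score best on the rough comparison (both numbers are hypothetical):

```bash
# Convert and rough-score the epoch files in the range 1..30 (range semantics assumed),
# then run Counter on the best epoch (here: 12).
python convertAndRoughTest.py -r1 1 -r2 30 -src dev.gold -trg dev.test
python ../../DRS_parsing/counter.py -f1 12.test -f2 dev.test -pr -prin -ms > 12.results
```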
Evaluate on the test set. (Roughly the same as the evaluation on the development set; for example, `cd seq2tree/output_tst`.)
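A sketch of the same two steps on the test set; the gold file names here are assumed to mirror the development ones (e.g. `tst.gold`/`tst.test`), so check the actual files in `output_tst` first:

```bash
cd seq2tree/output_tst
# Assumed file names, by analogy with the development set; only the chosen epoch (here 12) is scored.
python convertAndRoughTest.py -r1 12 -r2 12 -src tst.gold -trg tst.test
python ../../DRS_parsing/counter.py -f1 12.test -f2 tst.test -pr -prin -ms > 12.results
```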
- `jupyter notebook`
- Open `analysis_dmatch.ipynb`
- Change the file names in the 3rd and 4th cells and run the first 4 cells
- Copy `wiki.multi.de.vec.txt` to `PMB_multi`
- Copy `wiki.multi.it.vec.txt` to `PMB_multi`
- Copy `wiki.multi.nl.vec.txt` to `PMB_multi`
- Copy `wiki.multi.en.vec.txt` to `PMB_multi`
- Copy `de.input` to `PMB_multi/PMB_de_v2/PMB/gold`
- Copy `nl.input` to `PMB_multi/PMB_nl_v2/PMB/gold`
- Copy `it.input` to `PMB_multi/PMB_it_v2/PMB/gold`
- `cd PMB2/gold`
- `python replaceCard.py`
- `python replaceCard.py -src dev_input.txtRaw -trg dev_input.txt`
- Generate the global condition tag file `tag.txt` from the training data
- `python globalRel.py`
- `cd PMB_multi/PMB_de_v2/PMB/gold`
- `python replace1.py`
- `python replace2.py`
- Repeat the previous three steps (`cd` into the language's `gold` directory, then run `replace1.py` and `replace2.py`) for nl and it; see the sketch below.
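A small sketch of that repetition, assuming `replace1.py` and `replace2.py` are available in (or copied to) the nl and it `gold` directories just as for de:

```bash
# Run the de preprocessing scripts for Dutch and Italian as well.
for lang in nl it; do
    ( cd "PMB_multi/PMB_${lang}_v2/PMB/gold" && python replace1.py && python replace2.py )
done
```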
- `cd seq2tree_multi` (or `cd tree2tree_multi`, `cd tree2treePos_multi`, `cd seqtree2tree_multi`)
- `python main.py`
  - `-s`: save the model (default=False)
  - `-t`: only test the saved model (default=False)
  - `-r`: reload the model when continuing training (default=False)
  - `-mp`: drop the Universal POS tags from the features (default=False)
  - `-md`: drop the Universal Dependency tags from the features (default=False)
  - `-mw`: drop the cross-lingual word embeddings from the features (default=False)
  - `-model`: the model name (default='output_model/1.model')
- Use Ctrl+C to stop the training; an example follows below.
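Two hypothetical runs with the multilingual model; as before, the exact flag syntax is an assumption (check `python main.py -h`), and `noemb.model` is just an illustrative name:

```bash
# Flag syntax and the ablation model name are assumptions -- see `python main.py -h`.
# Only test a previously saved multilingual model:
python main.py -t True -model output_model/1.model

# Train an ablation that drops the cross-lingual word embeddings from the features:
python main.py -s True -mw True -model output_model/noemb.model
```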
The development and test results after each epoch are written to `output_dev` and `output_tst` respectively. First choose the epoch that performs best on the development set, then report that epoch's results on the test set.
Evaluate on the development set.
- `cd seq2tree_multi/output_dev` (or `cd tree2tree_multi/output_dev`, `cd tree2treePos_multi/output_dev`, `cd seqtree2tree_multi/output_dev`)
- Convert the predicted Discourse Representation Structures into the line format expected by Counter, and produce rough scores by comparing the two files line by line. Choose the epoch (file) `i` with the highest F-score and use that `i` in the next step.
- `python convertAndRoughTest.py`
  - `-r1`: start of the evaluated epoch range (default=1)
  - `-r2`: end of the evaluated epoch range (default=2)
  - `-src`: the gold development file (default='dev.gold')
  - `-trg`: the transformation result of the gold development file (default='dev.test')
  - `-gold`: the gold development file after transformation (default='dev.test')
- `python ../../DRS_parsing/counter.py -f1 i.test -f2 dev.test -pr -prin -ms` (optionally redirect the output: `> i.results`), where `i` is the epoch chosen in the previous step.
Evaluate on the nl/de/it test sets. (Roughly the same as the evaluation on the development set; for example, `cd seq2tree_multi/output_it_tst`.)
- `jupyter notebook`
- Open `analysis_dmatch.ipynb`
- Change the file names in the 3rd and 4th cells and run the first 4 cells
The code is based on the code in `EdinburghNLP/EncDecDRSparsing`.