A model for processing the MCScript data set provided by SemEval-2018 Task 11: Machine Comprehension Using Commonsense Knowledge.
Different variations of word embeddings can be integrated into the model. Please make sure the embedding files are placed in the appropriate directory: src/test/resources/embeddings/
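Pre-trained embeddings such as GloVe ship as plain-text files with one token per line followed by its vector components. As a rough illustration of what the model has to read from that directory, here is a minimal sketch of parsing one such line; the class name `EmbeddingLoader` is hypothetical and not part of this project:

```java
import java.util.Map;

public class EmbeddingLoader {

    // Parse one line of a GloVe-style text embedding file:
    // a token followed by space-separated float components.
    public static Map.Entry<String, float[]> parseLine(String line) {
        String[] parts = line.trim().split("\\s+");
        float[] vector = new float[parts.length - 1];
        for (int i = 1; i < parts.length; i++) {
            vector[i - 1] = Float.parseFloat(parts[i]);
        }
        return Map.entry(parts[0], vector);
    }

    public static void main(String[] args) {
        Map.Entry<String, float[]> entry =
                parseLine("the 0.418 0.24968 -0.41242");
        System.out.println(entry.getKey() + " -> "
                + entry.getValue().length + " dimensions");
    }
}
```

Word2Vec and fastText binaries use different on-disk formats, so loading them requires format-specific readers rather than this line-based parsing.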
For the experiments, several pre-trained word embeddings were used. The files can be obtained here: GloVe, Word2Vec, fastText
Some of the methods rely on the Google Web 1T 5-gram data set. Please ensure the files are stored in the directory provided for that purpose: src/test/resources/Web1t/
The corresponding files can be found here: Google Web 1T
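Web 1T n-gram files store one record per line, with the n-gram tokens separated from the occurrence count by a tab. As a hedged sketch (the record class below is hypothetical, not part of this project), parsing such a line could look like this:

```java
public class NgramRecord {

    public final String[] tokens;
    public final long count;

    // Parse a Web 1T record of the form "w1 w2 ... wn<TAB>count".
    public NgramRecord(String line) {
        int tab = line.lastIndexOf('\t');
        this.tokens = line.substring(0, tab).split(" ");
        this.count = Long.parseLong(line.substring(tab + 1));
    }

    public static void main(String[] args) {
        NgramRecord r = new NgramRecord("serve as the incoming president\t92");
        System.out.println(r.tokens.length + " tokens, count " + r.count);
    }
}
```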
The model uses ND4J as its computing library. ND4J relies on a BLAS backend for its computations, so please ensure that all prerequisites for this are met.
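If the project is built with Maven, the BLAS-backed native ND4J backend is typically pulled in via the standard `nd4j-native-platform` artifact; the version below is an assumption and should be replaced with the one this project actually declares:

```xml
<dependency>
  <groupId>org.nd4j</groupId>
  <artifactId>nd4j-native-platform</artifactId>
  <!-- version is an assumption; match it to this project's build file -->
  <version>1.0.0-M2.1</version>
</dependency>
```

The `-platform` artifact bundles native BLAS binaries for common operating systems, which is usually the easiest way to satisfy the backend prerequisite.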