This file contains my implementation of a multi-layer perceptron (MLP) set up to train on data for the XOR logic gate. Minor alterations to the training data would let the model learn AND, OR, or other two-input logic gates.
As we discussed in class, the model trains through repeated forward and backward propagation. Two parameters can be altered to tweak the training: epochs, the number of training passes performed, and learning_rate, the proportion by which each update adjusts the network's weights. Currently, those values are set to:
```
epochs = 100000
learning_rate = 0.1
```

The MLP has the following structure:
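Below is a minimal sketch of such a network, assuming a 2-2-1 feed-forward layout with sigmoid activations, a squared-error loss, and batch gradient descent; the exact layer sizes and initialization used in xor-mlp.py may differ:

```python
import numpy as np

# XOR truth table: inputs and expected outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

epochs = 100000
learning_rate = 0.1

# Hypothetical 2-2-1 layout: 2 inputs -> 2 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2))   # input-to-hidden weights
b1 = np.zeros((1, 2))          # hidden biases
W2 = rng.normal(size=(2, 1))   # hidden-to-output weights
b2 = np.zeros((1, 1))          # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(epochs):
    # Forward propagation: input -> hidden -> output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward propagation: squared-error gradient through the sigmoids
    # (sigmoid'(z) = s * (1 - s) where s is the activation).
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Gradient-descent updates, scaled by learning_rate.
    W2 -= learning_rate * (hidden.T @ output_delta)
    b2 -= learning_rate * output_delta.sum(axis=0, keepdims=True)
    W1 -= learning_rate * (X.T @ hidden_delta)
    b1 -= learning_rate * hidden_delta.sum(axis=0, keepdims=True)

# Print the trained network's prediction for each input pair.
hidden = sigmoid(X @ W1 + b1)
predictions = sigmoid(hidden @ W2 + b2)
for inputs, prediction in zip(X, predictions):
    print(f"{inputs} -> {prediction[0]:.4f}")
```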
With the aforementioned values of epochs and learning_rate, the following output is printed to the terminal:
With a higher number of epochs, the predictions become more accurate, but the model takes longer to train. I tested the same code with:

```
epochs = 1000000
```

and obtained the following result:
As can be seen, the results are closer to the expected values, but not by much. Thus, while there is some value in increasing the number of epochs, the returns are diminishing. Overall, the model is very accurate.
To run this module, ensure that you have numpy installed:
```
pip install numpy
```

Run xor-mlp.py to train the MLP and evaluate its performance.
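For example, from the directory containing the script (assuming a Python 3 interpreter is on your PATH):

```
python xor-mlp.py
```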


