Python implementation of a simple artificial neural network (ANN)
The information in the network moves only in one direction, from the input layer through the hidden layer to the output layer. Hence, it is a feedforward neural network. Each neuron in one layer is connected to every neuron in the subsequent layer, so the net is fully connected.
The function feed_forward() gives the output of the neural net for a given input X and weights weights. weights must be a list of the weight matrices (numpy arrays) of the individual layers. In each layer, the input is multiplied by the layer's weight matrix and the activation function sigmoid() is applied to each value of the product matrix.
This process is also called forward propagation.
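As a minimal sketch (the actual code may differ), feed_forward() could be implemented along these lines; the shapes of X and the weight matrices are assumptions:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation, applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(X, weights):
    """Propagate input X (one example per row) through the net;
    weights is a list of one weight matrix per layer."""
    activation = X
    for W in weights:
        # Multiply by the layer's weight matrix, then apply
        # sigmoid() to each entry of the product matrix.
        activation = sigmoid(activation @ W)
    return activation
```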
Backpropagation is used to compute the gradient of the loss function with respect to the weights.
This process is implemented in the function backprop() for a dense neural net whose output layer consists of a single neuron and whose loss function is the log loss. The neural network thus becomes a binary classifier.
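Under the assumption of one hidden layer and a single sigmoid output neuron, the gradient computation could look like the sketch below; all shapes and names beyond those in the text are assumptions, not the original implementation:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation (same helper as in the sketch above).
    return 1.0 / (1.0 + np.exp(-z))

def backprop(X, Y, W_hidden, W_output):
    """Gradients of the mean log loss for a net with one hidden
    layer and a single sigmoid output neuron (assumed layout).
    X: (n, d) inputs, Y: (n, 1) targets in {0, 1}."""
    # Forward pass, keeping the intermediate activations.
    H = sigmoid(X @ W_hidden)      # hidden activations, (n, h)
    P = sigmoid(H @ W_output)      # predicted probabilities, (n, 1)

    n = X.shape[0]
    # For the log loss combined with a sigmoid output, the error
    # at the output neuron simplifies to (prediction - target).
    delta_out = (P - Y) / n                       # (n, 1)
    grad_output = H.T @ delta_out                 # (h, 1)

    # Propagate the error back through the hidden layer;
    # H * (1 - H) is the derivative of the sigmoid.
    delta_hidden = (delta_out @ W_output.T) * H * (1.0 - H)
    grad_hidden = X.T @ delta_hidden              # (d, h)

    return grad_hidden, grad_output
```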
In general, the goal of any supervised learning algorithm is to find a function that maps a set of inputs to the correct outputs. The function artificial_neural_network() trains such a net; it takes as input the training data, the number of epochs epochs, the batch size batch_size, and the learning rates LR_H for the hidden layer and LR_O for the output layer.
The weights are updated batch by batch: the predictions for the current weights are computed with feed_forward(), the loss function log_loss() is used to calculate the discrepancy between the predicted and the actual outputs, and the gradients from backprop() are used to adjust the weights. The first batch consists of the first batch_size training examples, the next batch of the following batch_size examples, and so on; one pass over the whole training set is an epoch, and this is repeated for the given number of epochs.
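Putting the pieces together, a training loop matching this description might look roughly as follows; feed_forward() and backprop() refer to the sketches above, while the random initialisation, the hidden layer size, and the exact form of log_loss() are assumptions:

```python
import numpy as np

def log_loss(Y, P):
    """Mean binary cross-entropy; P is clipped to avoid log(0)."""
    P = np.clip(P, 1e-12, 1.0 - 1e-12)
    return -np.mean(Y * np.log(P) + (1.0 - Y) * np.log(1.0 - P))

def artificial_neural_network(X, Y, epochs, batch_size, LR_H, LR_O):
    """Hypothetical training loop; feed_forward() and backprop()
    are the sketches above."""
    rng = np.random.default_rng(0)
    hidden_units = 8                 # assumed hidden layer size
    W_hidden = rng.normal(scale=0.1, size=(X.shape[1], hidden_units))
    W_output = rng.normal(scale=0.1, size=(hidden_units, 1))

    LOSS_VEC, ACC_VEC = [], []
    for _ in range(epochs):
        # Walk through the training set in chunks of batch_size.
        for start in range(0, X.shape[0], batch_size):
            xb = X[start:start + batch_size]
            yb = Y[start:start + batch_size]
            grad_h, grad_o = backprop(xb, yb, W_hidden, W_output)
            # Separate learning rates for hidden and output layer.
            W_hidden -= LR_H * grad_h
            W_output -= LR_O * grad_o

        # Track loss and accuracy once per epoch.
        P = feed_forward(X, [W_hidden, W_output])
        LOSS_VEC.append(log_loss(Y, P))
        ACC_VEC.append(float(np.mean((P > 0.5) == Y)))

    return W_hidden, W_output, LOSS_VEC, ACC_VEC
```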
For each epoch, the current log loss and the current training accuracy of the output are tracked in the lists LOSS_VEC and ACC_VEC. These metrics can then be plotted to evaluate the learning of the neural net.
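Assuming the two lists are returned as in the sketch above, the metrics could be plotted with matplotlib, for example:

```python
import matplotlib.pyplot as plt

# Plot the per-epoch metrics to inspect the training behaviour.
fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(10, 4))
ax_loss.plot(LOSS_VEC)
ax_loss.set_xlabel("epoch")
ax_loss.set_ylabel("log loss")
ax_acc.plot(ACC_VEC)
ax_acc.set_xlabel("epoch")
ax_acc.set_ylabel("training accuracy")
plt.show()
```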
Possible extensions:

- Allow the use of different activation functions, like ReLU or tanh (see the sketch after this list)
- Allow a different kind of output layer
- Implement different kinds of loss functions
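For the first point, hypothetical drop-in alternatives to sigmoid() might look like this; backprop() would then need the matching derivative instead of H * (1 - H):

```python
import numpy as np

def relu(z):
    """Rectified linear unit: max(0, z), applied element-wise."""
    return np.maximum(0.0, z)

def relu_derivative(z):
    # Subgradient of ReLU: 1 for positive inputs, 0 otherwise.
    return (z > 0).astype(float)

def tanh(z):
    """Hyperbolic tangent, squashes values into (-1, 1)."""
    return np.tanh(z)

def tanh_derivative(z):
    return 1.0 - np.tanh(z) ** 2
```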