This repository contains implementations of core machine learning techniques such as the Perceptron algorithm and Neural Networks using C and C++. The project serves as an educational sandbox for experimenting with these algorithms at a low level, with a focus on understanding their mathematical principles and practical performance.
- Implementation of Perceptron algorithms
- Custom Neural Network architectures
- Efficient matrix operations (including multiplication and backpropagation)
- Example footage and visualizations
Evolutionary algorithms are inspired by biological evolution processes such as selection, mutation, and crossover. They are used to optimize solutions iteratively, and in this repo may be used to explore optimal weights for neural networks.
Generally, the algorithm initializes a population of candidate solutions, evaluates them, and selects better solutions across generations.
The Perceptron is a fundamental building block for neural networks. It takes several input values, multiplies them by their respective weights, adds a bias, and passes the result through an activation function (usually a step function).
Weights determine the influence of each input feature, and the bias shifts the activation threshold. Training changes these values to minimize classification error.
Example Usage:
You can run basic experiments with perceptrons using the compiled binary. See example code in the repository.
Neural Networks are composed of layers of perceptrons ("neurons"). Inputs are processed through interconnected weights, and outputs are produced through the final layer. Training involves adjusting all weights/biases using techniques like backpropagation and gradient descent.
This algorithm computes derivatives of the error function with respect to each weight, enabling efficient weight updates and learning.
Matrix multiplication is central for propagating data through layers of a neural network.
Optimized matrix operations leverage parallel computation and memory access patterns, dramatically speeding up neural network computations.
Backpropagation relies heavily on matrix operations to calculate the gradients necessary for learning.
To build and run the project:
- Clone the repository:
git clone https://github.com/gecarval/MachineLearning.git
- Move to the project folder:
cd MachineLearning
- Compile the project:
make
- Execute the main program:
./machinelearn
| BUTTON | ACTION |
|---|---|
| Perceptron | Press Space |
| Esc | Exit |
| T | Train |
| W | Up |
| S | Down |
| A | Left |
| D | Right |
| BUTTON | ACTION |
|---|---|
| Neural Network | Press Enter |
| Esc | Exit |
| LMB | Red point |
| RMB | Green point |
| Ctrl+Z | Undo point |
| D | Delete all points |
| S | Save Neural Network |
| L | Load Neural Network |
| R | Reset Neural Network |
| Up | Increase learn rate |
| Down | Decrease learn rate |