This repository was set up to host a Jupyter Notebook for the final assignment of a Data Science for Social Good course at the University of Salzburg. The aim was to investigate various potential biases in machine learning, to understand whom they affect, what impacts they can have, and what can be done to address them. The final notebook for my assignment is available here:
The tools used in this analysis are from the Aequitas Bias and Fairness Audit Toolkit.
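For reference, a minimal sketch of a typical Aequitas audit is shown below. It assumes a predictions table with `score`, `label_value`, and protected-attribute columns; the file name and reference groups are illustrative placeholders, not taken from the notebook.

```python
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# Load model predictions; Aequitas expects a 'score' column, a 'label_value'
# column, and one or more protected-attribute columns (file name is hypothetical).
df = pd.read_csv("predictions.csv")

# Group-level confusion-matrix metrics (FPR, FNR, etc.) per attribute value
group = Group()
xtab, _ = group.get_crosstabs(df)

# Disparities relative to chosen reference groups (reference values are examples)
bias = Bias()
bias_df = bias.get_disparity_predefined_groups(
    xtab,
    original_df=df,
    ref_groups_dict={"race": "Caucasian", "sex": "Male"},
    alpha=0.05,
)

# Flag which groups fall outside the fairness thresholds (80% rule by default)
fairness = Fairness()
fairness_df = fairness.get_group_value_fairness(bias_df)
print(fairness_df.head())
```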
This course is part of the MSc Applied Geoinformatics at the University of Salzburg.
