We release here the code of the methods proposed in the following papers, to help readers interested in our work reproduce the reported results.
- [P1] Does machine bring in extra bias in learning? Approximating fairness in models promptly. [arXiv]
- [P2] Approximating discrimination within models when faced with several non-binary sensitive attributes. [arXiv]
- Does machine bring in extra bias in learning? Approximating discrimination within models quickly. In NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning (Non-archival poster, OpenReview).
To reproduce our empirical results, please refer to the instructions and use the released experimental data.
We proposed a fairness measure named *harmonic fairness measure via manifolds* (HFM), available in three optional versions, which provides fine-grained discrimination evaluation for one or more sensitive attributes (sen-att-s). HFM relies on the Hausdorff distance under the Euclidean metric, whose direct computation is rather expensive; to accelerate it, we further proposed several approximation algorithms for efficient bias evaluation.
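For intuition, the distance at the core of HFM can be sketched as follows. This is a minimal illustration built on SciPy's `directed_hausdorff`, not the repository's actual implementation; the toy subgroup data and all names here are made up for demonstration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Toy feature representations of two subgroups split by a sensitive
# attribute (purely illustrative; real subgroups come from the dataset).
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=(100, 4))
group_b = rng.normal(loc=0.3, scale=1.0, size=(120, 4))

# The symmetric Hausdorff distance is the maximum of the two
# directed Hausdorff distances.
d_ab, _, _ = directed_hausdorff(group_a, group_b)
d_ba, _, _ = directed_hausdorff(group_b, group_a)
hausdorff = max(d_ab, d_ba)
print(f"Hausdorff distance between subgroups: {hausdorff:.4f}")
```

Exact computation compares every point of one subgroup against every point of the other, which is precisely the cost the approximation algorithms are designed to avoid on large datasets.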
In other words, we provide evaluation of extra discrimination for three cases: (1) a single binary sensitive attribute (sen-att); (2) a single multi-valued sen-att; and (3) multiple sen-atts. Case 1 comes from [P1], and the other two come from [P2]. A short tutorial covers all the aforementioned cases and methods; a rough sketch of the underlying subgroup partitioning follows.
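The sketch below shows how data might be partitioned into subgroups for each case. The column names, attribute values, and privileged value are purely hypothetical; the repository's actual interface may differ.

```python
import pandas as pd

# Hypothetical dataset; "sex" and "race" stand in for sen-atts.
df = pd.DataFrame({
    "sex":  ["female", "male", "male", "female"],
    "race": ["A", "B", "A", "C"],
    "yhat": [1, 0, 1, 1],
})

# Case 1: one bi-valued sen-att -> privileged group vs. the rest.
groups_1 = [df[df["sex"] == "female"], df[df["sex"] != "female"]]

# Case 2: one multi-valued sen-att -> one subgroup per attribute value.
groups_2 = [g for _, g in df.groupby("race")]

# Case 3: several sen-atts -> one subgroup per joint value combination.
groups_3 = [g for _, g in df.groupby(["sex", "race"])]
```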
You're welcome to adjust the parameters (except `priv_val`, which depends on the data you use) as needed, or to explore potential improvements. Please note that this version may contain typos or errors; if you find any, feel free to contact us or open an issue.
If you find this repository useful, please consider citing our work:
```bibtex
@article{bian2024does,
  author  = {Bian, Yijun and Luo, Yujie},
  title   = {Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly},
  journal = {arXiv preprint arXiv:2405.09251},
  year    = {2024},
}

@article{bian2024approximating,
  author  = {Bian, Yijun and Luo, Yujie and Xu, Ping},
  title   = {Approximating Discrimination Within Models When Faced With Several Non-Binary Sensitive Attributes},
  journal = {arXiv preprint arXiv:2408.06099},
  year    = {2024},
}
```

ApproxBias is released under the MIT License.