
ApproxBias


We release here the code of the methods proposed in the following papers, to help people interested in our work reproduce the reported results.

  • [P1] Does machine bring in extra bias in learning? Approximating fairness in models promptly. [arXiv]
  • [P2] Approximating discrimination within models when faced with several non-binary sensitive attributes. [arXiv]
  • Does machine bring in extra bias in learning? Approximating discrimination within models quickly. In NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning (Non-archival poster, OpenReview).

To reproduce our empirical results, please refer to the instructions and use the released experimental data.

Getting started

We propose a fairness measure named harmonic fairness measure via manifolds (HFM), with three optional versions, which provides fine-grained discrimination evaluation for one or more sensitive attributes (sen-att-s). HFM relies on the Hausdorff distance between point sets in Euclidean space, whose direct computation is rather expensive. To accelerate the distance computation, we further propose several approximation algorithms for efficient bias evaluation.
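Since HFM builds on this distance, below is a minimal brute-force sketch (not the repository's implementation) of the symmetric Hausdorff distance between the instance sets of two groups; its pairwise cost of O(n·m) distance evaluations is exactly what the approximation algorithms aim to avoid. The group sizes and feature dimension are placeholders.

import numpy as np

def directed_hausdorff(A, B):
    """max over a in A of min over b in B of ||a - b||_2 (brute force)."""
    diffs = A[:, None, :] - B[None, :, :]      # shape (n, m, d)
    dists = np.linalg.norm(diffs, axis=-1)     # pairwise Euclidean distances
    return dists.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_priv = rng.normal(0.0, 1.0, size=(500, 8))  # instances of one group
    X_marg = rng.normal(0.2, 1.0, size=(400, 8))  # instances of the other group
    print(hausdorff(X_priv, X_marg))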

In other words, we provide evaluation of extra discrimination for three cases: 1) a single bi-valued sensitive attribute (sen-att); 2) a single multi-valued sen-att; and 3) more than one sen-att. Case 1 comes from [P1], and the other two come from [P2]. Here is a short tutorial covering all of the aforementioned cases and methods.

You are welcome to adjust the parameters (except priv_val, which depends on the data you use) as needed, or to explore potential improvements. Please note that this version may contain typos or errors; if you find any, feel free to contact us or open an issue.
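For orientation only, here is a hedged sketch (plain pandas, not this repository's API) of how the three cases above differ at the data level: each case simply determines how the instances are split into groups before a distance such as the one sketched earlier is evaluated between them. The column names and the priv_val value are illustrative placeholders for your own dataset.

import pandas as pd

# toy data; columns and values are placeholders
df = pd.DataFrame({
    "sex":       ["F", "M", "M", "F", "M"],
    "race":      ["a", "b", "c", "a", "b"],
    "age_group": ["<30", ">=30", ">=30", "<30", "<30"],
})

# Case 1: one bi-valued sen-att; priv_val names the privileged value
priv_val = "M"
groups_case1 = [df[df["sex"] == priv_val], df[df["sex"] != priv_val]]

# Case 2: one multi-valued sen-att; one group per attribute value
groups_case2 = [g for _, g in df.groupby("race")]

# Case 3: several sen-att-s considered jointly
groups_case3 = [g for _, g in df.groupby(["sex", "age_group"])]

print(len(groups_case1), len(groups_case2), len(groups_case3))  # 2 3 3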

Additional information

If you find this repository useful, please consider citing our work.

@article{bian2024does,
  author  = {Bian, Yijun and Luo, Yujie},
  title   = {Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly},
  journal = {arXiv preprint arXiv:2405.09251},
  year    = {2024},
}

@article{bian2024approximating,
  author  = {Bian, Yijun and Luo, Yujie and Xu, Ping},
  title   = {Approximating Discrimination Within Models When Faced With Several Non-Binary Sensitive Attributes},
  journal = {arXiv preprint arXiv:2408.06099},
  year    = {2024},
}

Licence

ApproxBias is released under the MIT Licence.
