Auditing LLM Bias (BERT/XLM-R) for EU AI Act Compliance: Measuring Statistical Parity and Algorithmic Fairness in Multilingual Settings
Updated Jan 10, 2026 · Jupyter Notebook
Probed gender-science bias in transformer language models by implementing the Word Embedding Association Test (WEAT) on English and Urdu embeddings. Conducted a comparative evaluation across BERT, RoBERTa, DistilBERT, and XLM-RoBERTa, revealing model- and language-dependent bias patterns and providing reproducible benchmarks.
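The WEAT effect size mentioned above compares how strongly two target word sets (e.g., science vs. arts terms) associate with two attribute sets (e.g., male vs. female terms) in embedding space. A minimal sketch of that computation, using synthetic vectors as stand-ins for real BERT/XLM-R embeddings (the function names and toy data here are illustrative, not the repository's actual code):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: differential association of target sets X, Y
    with attribute sets A, B. Each argument is a list of vectors.
    Result lies in [-2, 2]; positive means X leans toward A."""
    def s(w):
        # mean similarity to A minus mean similarity to B
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    pooled_std = np.std(sx + sy, ddof=1)
    return (np.mean(sx) - np.mean(sy)) / pooled_std

# Toy demonstration with synthetic 2-D vectors (placeholders for real
# contextual embeddings): X clusters near A, Y clusters near B.
rng = np.random.default_rng(0)
A = [np.array([1.0, 0.0]) + 0.01 * rng.normal(size=2) for _ in range(4)]
B = [np.array([0.0, 1.0]) + 0.01 * rng.normal(size=2) for _ in range(4)]
X = [np.array([0.9, 0.1]) + 0.01 * rng.normal(size=2) for _ in range(4)]
Y = [np.array([0.1, 0.9]) + 0.01 * rng.normal(size=2) for _ in range(4)]
d = weat_effect_size(X, Y, A, B)  # strongly positive for this toy setup
```

In a multilingual audit, the same function would be fed embeddings extracted per language (e.g., Urdu translations of the target/attribute lists), so effect sizes can be compared across models and languages.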
🛡️ Defense framework against cognitive vandalism and data poisoning in LLMs. Quantitative analysis of historical revisionism, moral-drift metrics, and implementation of reality proofs via temporal hashing (C2PA/Blockchain)