Probed gender–science bias in transformer language models by implementing the Word Embedding Association Test (WEAT) on English and Urdu embeddings. Conducted a comparative evaluation across BERT, RoBERTa, DistilBERT, and XLM-RoBERTa, revealing model- and language-dependent bias patterns and providing reproducible benchmarks.
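For reference, a minimal sketch of the WEAT effect size (Caliskan et al., 2017) that such a probe computes: the difference in mean cosine association of two target sets (e.g., science vs. arts terms) with two attribute sets (e.g., male vs. female terms), normalized by the pooled standard deviation. The vectors below are placeholder NumPy arrays, not embeddings extracted from the models above.

```python
import numpy as np

def cos(u, v):
    # cosine similarity between two vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def s(w, A, B):
    # association of word vector w with attribute set A vs. attribute set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # WEAT effect size: difference of mean target-set associations,
    # divided by the standard deviation of associations over X ∪ Y
    sx = [s(x, A, B) for x in X]
    sy = [s(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy illustration: X-words align with attribute A, Y-words with attribute B,
# so the effect size comes out positive (a larger value means stronger bias).
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([1.0, 0.1]), np.array([0.9, 0.0])]
Y = [np.array([0.1, 1.0]), np.array([0.0, 0.9])]
print(weat_effect_size(X, Y, A, B))
```

In practice the word vectors would be contextual embeddings pulled from each model's hidden states, with translated stimulus lists for the Urdu condition.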