
B-score: Detecting biases in large language models using response history

by An Vo¹, Mohammad Reza Taesiri², Daeyoung Kim¹*, Anh Totti Nguyen³*

*Equal advising
¹KAIST, ²University of Alberta, ³Auburn University

International Conference on Machine Learning (ICML 2025)

Blog · arXiv · Dataset · License: MIT


📌 Abstract


Large language models (LLMs) have been found to exhibit strong biases, e.g., gender biases (against females) or numerical biases (in favor of the number 7). We test whether LLMs can output less biased answers when allowed to observe their prior answers to the same question in a multi-turn conversation. To evaluate LLM biases thoroughly across different question types, we propose a set of questions spanning 9 topics and 4 categories: questions that ask for Subjective opinions, Random answers, or objective answers to real-world Easy or Hard questions. Interestingly, LLMs are able to "de-bias" themselves in multi-turn settings in response to Random questions, but not to the other categories. Furthermore, we propose B-score, a novel metric that is effective in detecting biases in Subjective, Random, Easy, and Hard questions. On MMLU, HLE, and CSQA, leveraging B-score substantially improves the verification accuracy of LLM answers (i.e., accepting correct LLM answers and rejecting incorrect ones) compared to using verbalized confidence scores or single-turn answer probabilities alone. Code and data are available at: b-score.github.io
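
The paper defines B-score precisely; as a loose illustration only, the hypothetical sketch below contrasts how often a model picks an option across independent single-turn queries with how often it picks that option across turns of one multi-turn conversation, where a large gap flags a bias the model drops once it sees its own history. All names here are illustrative, not the repository's API.

from collections import Counter

def answer_frequency(answers, option):
    """Fraction of responses equal to `option`."""
    return Counter(answers)[option] / len(answers)

def b_score_sketch(single_turn_answers, multi_turn_answers, option):
    """Gap between an option's single-turn and multi-turn frequency.
    A large positive value suggests the model favors `option` when it
    has no history but drops it once shown its prior answers."""
    return (answer_frequency(single_turn_answers, option)
            - answer_frequency(multi_turn_answers, option))

# Toy example: a model answers "7" in 9/10 independent single-turn runs
# but in only 2/10 turns of one multi-turn conversation.
single = ["7"] * 9 + ["3"]
multi = ["7", "3", "7", "1", "5", "2", "9", "4", "6", "8"]
print(b_score_sketch(single, multi, "7"))  # ~0.7

A gap like this can then be thresholded, which is roughly how the abstract describes using B-score to accept or reject LLM answers on MMLU, HLE, and CSQA.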


💻 Getting Started

git clone https://github.com/anvo25/b-score.git
cd b-score
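
Setup beyond cloning is not documented here; since the example below queries gpt-4o, an OpenAI API key is presumably required. A minimal sanity check, assuming the scripts read the standard OPENAI_API_KEY environment variable:

import os

# Assumption: the scripts read the standard OPENAI_API_KEY variable.
assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY before running"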

🚀 Quick Example

Run an example task (e.g., the 2-choice gender task in the random category):

python -m main \
  --task_name 2-choice_gender \
  --category random \
  --model_name gpt-4o-2024-08-06 \
  --n_runs 3 \
  --temperature 0.7

Check results under:

logs/<MODEL>/<TASK>/<CATEGORY>/temp_<T>/
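
To compare categories side by side, one option is to loop over them with the same flags as above. This is a minimal sketch; the lowercase names other than random are assumptions based on the paper's Subjective/Random/Easy/Hard categories:

import subprocess

# Sweep the example task over all four question categories.
# Assumption: --category accepts lowercase versions of the paper's category names.
for category in ["subjective", "random", "easy", "hard"]:
    subprocess.run(
        ["python", "-m", "main",
         "--task_name", "2-choice_gender",
         "--category", category,
         "--model_name", "gpt-4o-2024-08-06",
         "--n_runs", "3",
         "--temperature", "0.7"],
        check=True,
    )

Each run then writes to its own logs/<MODEL>/<TASK>/<CATEGORY>/temp_<T>/ directory.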

📁 Tasks and Benchmarks

  • ✅ 2-Choice, 4-Choice, and 10-Choice subjective/random/objective questions
  • ✅ MMLU, CommonsenseQA, HLE
  • ✅ BBQ: Ambiguous bias questions

📂 Structure

src/
├─ main.py                # 36-question B-score evaluation
├─ benchmark_main.py      # Benchmark runner
├─ benchmark_utils.py     # Benchmark helpers
├─ utils.py               # Core logic (B-score metric, parsing, etc.)
└─ prompts/               # 36 questions

📈 Results

(Result figures: see the paper or the project page at b-score.github.io.)


📖 Citation

@inproceedings{vo2025bscore,
  author    = {Vo, An and Taesiri, Mohammad Reza and Kim, Daeyoung and Nguyen, Anh Totti},
  title     = {B-score: Detecting biases in large language models using response history},
  booktitle = {Forty-second International Conference on Machine Learning, {ICML} 2025},
  year      = {2025}
}
