- Overview
- Installation
- Quick Start
- How It Works
- API Reference
- Real-World Usage Pattern
- Requirements
- Testing
- Contributing
- License
## Overview

When using Large Language Models (LLMs) as judges to evaluate other models or systems, the judge's own biases and errors can significantly affect the reliability of the evaluation. judgy provides tools to estimate your system's true success rate by correcting for LLM judge bias, and uses bootstrap resampling to generate confidence intervals around that estimate.
## Installation

```bash
pip install judgy
```

To install from source (with development and plotting extras):

```bash
git clone https://github.com/ai-evals-course/judgy.git
cd judgy
pip install -e .[dev,plotting]
```

## Quick Start

```python
from judgy import estimate_success_rate

# Your data: 1 = Pass, 0 = Fail
test_labels = [1, 1, 0, 0, 1, 0, 1, 0]      # Human labels on test set
test_preds = [1, 0, 0, 1, 1, 0, 1, 0]       # LLM judge predictions on test set
unlabeled_preds = [1, 1, 0, 1, 0, 1, 0, 1]  # LLM judge predictions on unlabeled data

# Estimate the true pass rate with a 95% confidence interval
theta_hat, lower_bound, upper_bound = estimate_success_rate(
    test_labels=test_labels,
    test_preds=test_preds,
    unlabeled_preds=unlabeled_preds,
)

print(f"Estimated true pass rate: {theta_hat:.3f}")
print(f"95% confidence interval: [{lower_bound:.3f}, {upper_bound:.3f}]")
```

## How It Works

The library implements a bias-correction method based on the following steps:
- Judge Accuracy Estimation: Calculate the LLM judge's True Positive Rate (TPR) and True Negative Rate (TNR) using the labeled test data.
- Correction: Apply the correction formula to account for judge bias:

  ```
  θ̂ = (p_obs + TNR - 1) / (TPR + TNR - 1)
  ```

  where `p_obs` is the observed pass rate from the judge.
- Bootstrap Confidence Intervals: Use bootstrap resampling to quantify uncertainty in the estimate.
## API Reference

```python
estimate_success_rate(
    test_labels,
    test_preds,
    unlabeled_preds,
    bootstrap_iterations=20000,
    confidence_level=0.95,
)
```

Estimate the true pass rate with bias correction and confidence intervals.

Parameters:

- `test_labels`: Array-like of 0/1 values (human labels on test set)
- `test_preds`: Array-like of 0/1 values (judge predictions on test set)
- `unlabeled_preds`: Array-like of 0/1 values (judge predictions on unlabeled data)
- `bootstrap_iterations`: Number of bootstrap iterations (default: 20000)
- `confidence_level`: Confidence level between 0 and 1 (default: 0.95)

Returns:

- `theta_hat`: Point estimate of the true pass rate
- `lower_bound`: Lower bound of the confidence interval
- `upper_bound`: Upper bound of the confidence interval
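For intuition, a confidence interval of this kind could be produced with a percentile bootstrap along the following lines. `bootstrap_ci` is a hypothetical sketch (the library's actual resampling scheme may differ): it resamples the labeled and unlabeled sets independently, re-applies the correction to each resample, and takes percentiles of the resulting estimates.

```python
import numpy as np

def bootstrap_ci(test_labels, test_preds, unlabeled_preds,
                 iterations=5000, confidence_level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(test_labels)
    preds = np.asarray(test_preds)
    unlabeled = np.asarray(unlabeled_preds)
    estimates = []
    for _ in range(iterations):
        # Resample the labeled test set and the unlabeled set independently
        i = rng.integers(0, len(labels), len(labels))
        j = rng.integers(0, len(unlabeled), len(unlabeled))
        li, pi = labels[i], preds[i]
        if li.min() == li.max():
            continue  # need both classes to estimate TPR and TNR
        tpr = pi[li == 1].mean()
        tnr = 1 - pi[li == 0].mean()
        if tpr + tnr <= 1:
            continue  # judge no better than chance in this resample
        p_obs = unlabeled[j].mean()
        estimates.append(np.clip((p_obs + tnr - 1) / (tpr + tnr - 1), 0.0, 1.0))
    alpha = 1 - confidence_level
    lower = float(np.percentile(estimates, 100 * alpha / 2))
    upper = float(np.percentile(estimates, 100 * (1 - alpha / 2)))
    return lower, upper
```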
## Real-World Usage Pattern

```python
from judgy import estimate_success_rate

# Step 1: Collect human labels on a test set
test_labels = [...]  # Human evaluation: 1 = good, 0 = bad

# Step 2: Get LLM judge predictions on the same test set
test_preds = [...]  # LLM judge predictions: 1 = good, 0 = bad

# Step 3: Get LLM judge predictions on your unlabeled data
unlabeled_preds = [...]  # LLM judge predictions on data you want to evaluate

# Step 4: Estimate the true pass rate
true_rate, lower, upper = estimate_success_rate(test_labels, test_preds, unlabeled_preds)

print(f"Your system's estimated true success rate: {true_rate:.1%}")
print(f"95% confidence interval: [{lower:.1%}, {upper:.1%}]")
```

## Requirements

- Python 3.8+
- numpy >= 1.20.0
## Testing

Run the test suite:

```bash
pytest tests/
```

Run with coverage:

```bash
pytest tests/ --cov=judgy --cov-report=html
```

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- The Rogan-Gladen correction method for bias correction in diagnostic tests
- Bootstrap methodology for confidence interval estimation
- The Python scientific computing ecosystem (NumPy, matplotlib)
## Support

If you encounter any issues or have questions, please:
- Check the documentation
- Search existing issues
- Create a new issue with a minimal reproducible example
Note: This library assumes that your LLM judge performs better than random chance (TPR + TNR > 1). If your judge's accuracy is too low, the correction method may not be applicable.
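You can check this assumption directly on your labeled test set before relying on the correction. `judge_beats_chance` below is a hypothetical helper (not part of the library's API) that computes Youden's J statistic, J = TPR + TNR - 1, and verifies it is positive:

```python
import numpy as np

def judge_beats_chance(test_labels, test_preds):
    labels = np.asarray(test_labels)
    preds = np.asarray(test_preds)
    tpr = preds[labels == 1].mean()      # True Positive Rate
    tnr = 1 - preds[labels == 0].mean()  # True Negative Rate
    # Youden's J must be positive for the correction to be meaningful
    return (tpr + tnr - 1) > 0
```

A judge that systematically inverts the labels (or guesses randomly) fails this check, and the correction formula's denominator becomes zero or negative.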
