Detecting hallucinations in LLM-generated answers using cross-checking consistency across models. Implements and extends the SAC3 method to smaller open-source models.
Updated Jun 19, 2025 - Jupyter Notebook
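For readers unfamiliar with the approach, here is a minimal sketch of the cross-model consistency idea behind SAC3: ask a second model the original question and semantically equivalent rephrasings, then measure how often its answers agree with the target answer. The names (`consistency_score`, `ask_model`, `paraphrase`, `answers_agree`) are illustrative placeholders, not this repository's API.

```python
# Minimal sketch of cross-model consistency checking in the spirit of SAC3.
# `ask_model`, `paraphrase`, and `answers_agree` are hypothetical callables you
# would back with real LLM calls and a semantic-equivalence check (e.g. NLI).
from typing import Callable, List


def consistency_score(
    question: str,
    target_answer: str,
    ask_model: Callable[[str], str],           # second ("verifier") model
    paraphrase: Callable[[str], List[str]],    # question-perturbation step
    answers_agree: Callable[[str, str], bool], # semantic-equivalence check
) -> float:
    """Fraction of cross-checked answers that agree with the target answer.

    A low score means the answer is inconsistent across models and question
    rephrasings, which SAC3 treats as a hallucination signal.
    """
    probes = [question] + paraphrase(question)
    agreements = [answers_agree(target_answer, ask_model(q)) for q in probes]
    return sum(agreements) / len(agreements)


# Toy usage with stub functions (replace the lambdas with real model calls).
if __name__ == "__main__":
    score = consistency_score(
        question="What is the capital of Australia?",
        target_answer="Canberra",
        ask_model=lambda q: "Canberra",
        paraphrase=lambda q: ["Which city is Australia's capital?"],
        answers_agree=lambda a, b: a.strip().lower() == b.strip().lower(),
    )
    print(f"consistency score: {score:.2f}")  # 1.00 -> answer looks consistent
```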
A lightweight comparative analysis of three modern black-box hallucination detection methods for language models: SAC3, SelfCheckGPT, and Semantic Entropy.
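As a rough illustration of one of these methods, the sketch below shows the semantic-entropy idea: sample several answers to the same question, cluster answers that share a meaning, and take the entropy over cluster frequencies. `semantic_entropy` and `same_meaning` are hypothetical names used for the example, not code from any of the listed repositories.

```python
# Rough sketch of semantic entropy: high entropy over meaning-clusters of
# sampled answers is used as a hallucination / uncertainty signal.
import math
from typing import Callable, List


def semantic_entropy(
    sampled_answers: List[str],
    same_meaning: Callable[[str, str], bool],  # e.g. bidirectional NLI in practice
) -> float:
    """Group semantically equivalent answers, then compute entropy over groups."""
    clusters: List[List[str]] = []
    for ans in sampled_answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    total = len(sampled_answers)
    probs = [len(c) / total for c in clusters]
    return -sum(p * math.log(p) for p in probs)


# Toy usage: three samples agree in meaning, one disagrees -> nonzero entropy.
if __name__ == "__main__":
    answers = ["Canberra", "It is Canberra.", "Canberra", "Sydney"]
    stub_same_meaning = lambda a, b: ("canberra" in a.lower()) == ("canberra" in b.lower())
    print(f"semantic entropy: {semantic_entropy(answers, stub_same_meaning):.3f}")
```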