diff --git a/CONTRIBUTORS.yaml b/CONTRIBUTORS.yaml
index c99d6eb..ddca3e3 100644
--- a/CONTRIBUTORS.yaml
+++ b/CONTRIBUTORS.yaml
@@ -6,4 +6,6 @@ signees:
   - github_username: "danielm322"
     date_signed: "2026-02-24"
   - github_username: "dm274516"
-    date_signed: "2026-02-24"
\ No newline at end of file
+    date_signed: "2026-02-24"
+  - github_username: "FabioArnez"
+    date_signed: "2026-02-26"
diff --git a/README.md b/README.md
index 1b41540..f71daa9 100644
--- a/README.md
+++ b/README.md
@@ -50,7 +50,7 @@
 ### Prerequisites
 
 - Python 3.9 or higher
-- CUDA-capable GPU (recommended for computer vision tasks)
+- CUDA-capable GPU (recommended for computer vision tasks and LLMs)
 
 ### Using pip
 
@@ -131,7 +131,7 @@ inference_module = LaRExInference(
 prediction, confidence_score = inference_module.get_score(test_image, layer_hook=hooked_layer)
 ```
 
-### LLM Uncertainty Estimation (White-box methods)
+### LLM Uncertainty Estimation (White-box methods) for Hallucination Detection
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
@@ -481,20 +481,11 @@ print(f"Uncertainty Scores: {scores}")
 - **GPU**: Required for efficient inference on object detection and segmentation
 - **Memory**: Varies by model size (8GB+ GPU memory recommended)
 
----
-
-## References
-
-### Publications
-
-- [Latent representation entropy density for distribution shift detection](https://hal.science/hal-04674980v1/file/417_latent_representation_entropy_.pdf)
-
-
 ---
 
 ## Contributing
 
-Contributions are welcome! Please feel free to submit issues or pull requests.
+Contributions are welcome! Please feel free to submit issues or pull requests by following the [Contribution Guidelines](CONTRIBUTING.md).
 
 ## License
 
@@ -507,6 +498,12 @@ See [LICENSE.txt](LICENSE.txt) for details.
 
 ---
 
-## Acknowledgments
+## References
+
+### Publications
+
+- [Latent representation entropy density for distribution shift detection](https://hal.science/hal-04674980v1/file/417_latent_representation_entropy_.pdf)
+
 
-This work was developed as part of the Confiance.ai program, focusing on trustworthy AI systems with emphasis on uncertainty estimation and out-of-distribution detection.
\ No newline at end of file
+- [The Map of Misbelief: Tracing Intrinsic and Extrinsic Hallucinations Through Attention Patterns](https://ojs.aaai.org/index.php/AAAI-SS/article/view/36884)
+---
\ No newline at end of file