This is the GitHub repository for the IJCNN 2025 paper "CRAVE: A Conflicting Reasoning Approach for Explainable Claim Verification Using LLMs".
We propose CRAVE (Conflicting Reasoning Approach for explainable claim VErification), a novel framework that leverages conflicting rationales generated by LLMs for explainable and accurate claim verification.

CRAVE comprises three modules: (1) Ambiguity-Elimination Enhanced Evidence Retrieval, which refines entity-based searches to collect relevant evidence from sources like Wikipedia; (2) Conflicting Perspective Reasoning and Preliminary Judgment, where LLMs reason across four dimensions—direct evidence, semantic relationships, linguistic cues, and logical inference—to produce preliminary judgments; and (3) a Small Language Model (SLM)-based Judge, fine-tuned to assess the confidence of conflicting rationales and deliver a final authenticity verdict.

Experiments on two public claim verification datasets show that CRAVE significantly outperforms state-of-the-art baselines, with enhanced evidence retrieval and more interpretable predictions.
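The three modules above can be sketched as a single pipeline. This is a conceptual illustration only, with the LLM and SLM calls replaced by toy stubs (all function names and scores here are hypothetical); the actual system lives in the scripts under ./test_evidence, ./test_rationale and ./SLM.

```python
def retrieve_evidence(claim):
    """Module 1 (stub): entity-based retrieval with ambiguity elimination."""
    return ["Some Wikipedia sentence relevant to the claim."]

def conflicting_rationales(claim, evidence):
    """Module 2 (stub): the LLM argues both sides across four dimensions
    (direct evidence, semantic relations, linguistic cues, logical inference)."""
    support = f"Rationale arguing {claim!r} is SUPPORTED given {evidence[0]!r}."
    refute = f"Rationale arguing {claim!r} is REFUTED given {evidence[0]!r}."
    return support, refute

def slm_judge(support_rationale, refute_rationale):
    """Module 3 (stub): a fine-tuned SLM scores each rationale's confidence;
    the higher-confidence side yields the final verdict."""
    conf_support, conf_refute = 0.8, 0.3  # toy scores, not real model outputs
    return "SUPPORTED" if conf_support >= conf_refute else "REFUTED"

def crave(claim):
    evidence = retrieve_evidence(claim)
    support, refute = conflicting_rationales(claim, evidence)
    return slm_judge(support, refute)

print(crave("The Eiffel Tower is in Paris."))  # -> SUPPORTED (with these toy scores)
```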
To prepare the claims and corpus for HOVER, run the following command:
bash prepare_hover_data.sh
To prepare the claims and corpus for FEVEROUS, run the following command:
bash prepare_feverous_data.sh
The claims and the indexed corpus will be saved in the ./[DATASET]/claims and ./[DATASET]/corpus folders. The corpus is used for ambiguity elimination.
If you cannot access Wikipedia directly, you can run a local Wikipedia server.
To run the local Wikipedia server, download wikipedia_en_all_maxi_2024-01.zim from https://www.mirrorservice.org/sites/download.kiwix.org/zim/wikipedia/
Run the following command:
python ./test_evidence/server.py
To generate the ambiguity-elimination plan, run the following command:
python ./test_evidence/decompose.py
To continue Ambiguity Elimination and Entity Retrieval, run the following command:
python ./test_evidence/ambiguity_elimination.py
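Ambiguity elimination here means resolving which of several same-named Wikipedia entries a claim actually refers to. The toy heuristic below (plain token overlap between the claim and candidate page summaries) only illustrates the idea; it is not the plan-based procedure implemented in ./test_evidence/ambiguity_elimination.py.

```python
def disambiguate(claim, candidates):
    """Pick the candidate page whose summary shares the most words with the claim.

    candidates: dict mapping page title -> short summary text.
    """
    claim_tokens = set(claim.lower().split())

    def overlap(title):
        return len(claim_tokens & set(candidates[title].lower().split()))

    return max(candidates, key=overlap)

candidates = {
    "Mercury (planet)": "smallest planet in the solar system closest to the sun",
    "Mercury (element)": "chemical element metal liquid at room temperature",
}
print(disambiguate("Mercury is the closest planet to the Sun", candidates))
# -> Mercury (planet)
```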
To continue Evidence Retrieval and Selection, run the following command:
python ./test_evidence/evidence.py
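Evidence selection keeps only the corpus sentences most relevant to the claim. As a rough stand-in for the selection logic in ./test_evidence/evidence.py (whose actual scoring is more involved), a minimal top-k ranking by word overlap looks like:

```python
def select_evidence(claim, sentences, k=2):
    """Rank corpus sentences by word overlap with the claim; keep the top k.

    Toy illustration only -- real systems use stronger retrieval scores.
    """
    claim_tokens = set(claim.lower().split())
    scored = sorted(
        sentences,
        key=lambda s: len(claim_tokens & set(s.lower().split())),
        reverse=True,
    )
    return scored[:k]

sentences = [
    "Paris is the capital of France.",
    "The Louvre is a museum.",
    "France is in Europe.",
]
print(select_evidence("Paris is the capital of France", sentences, k=1))
# -> ['Paris is the capital of France.']
```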
You can view the output of each procedure in ./[DATASET_NAME]. You can also specify arguments (see args.py) or change the prompts in prompt.py.
Run the following command for reasoning with the LLM:
python ./test_rationale/main.py
You can also directly evaluate the accuracy of the LLM outputs by running:
python ./test_rationale/accuracy.py
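Accuracy here is simply the fraction of claims whose predicted verdict matches the gold label. A minimal sketch of the metric (the label set and file format are whatever the dataset uses; this is not the repo's script):

```python
def accuracy(predictions, labels):
    """Fraction of claims whose predicted verdict matches the gold label."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

preds = ["SUPPORTED", "REFUTED", "SUPPORTED", "REFUTED"]
gold  = ["SUPPORTED", "REFUTED", "REFUTED", "REFUTED"]
print(accuracy(preds, gold))  # -> 0.75
```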
You can train your own Small Language Model using the files under ./SLM:
bash ./SLM/ft.sh
You can evaluate the pretrained SLM by running the following command:
python ./SLM/test.py
The results of different training methods using gpt-3.5-turbo as the LLM:

If you are interested in our pretrained model checkpoints or encounter any problem, please contact us directly or open an issue in the GitHub repo.
