Abstract
Prior work has shown that presupposition in generated questions can introduce unverified assumptions, leading to inconsistencies in claim verification. Additionally, prompt sensitivity remains a significant challenge for large language models (LLMs), resulting in a performance variance of 3-6%. While recent advancements have reduced this gap, our study demonstrates that prompt sensitivity remains a persistent issue. To address this, we propose a structured and robust claim verification framework that reasons through presupposition-free, decomposed questions. Extensive experiments across multiple prompts, datasets, and LLMs reveal that even state-of-the-art models remain susceptible to prompt variance and presupposition. Our method consistently mitigates these issues, achieving a 2-5% improvement.
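As a concrete illustration, here is a minimal sketch of such a pipeline, assuming an OpenAI-style chat API; the model name, prompt wording, and helpers (`ask`, `decompose`, `verify`) are hypothetical placeholders, not the paper's exact implementation.

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Single chat-completion call (hypothetical wrapper)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

def decompose(claim: str) -> list[str]:
    """Ask for simple sub-questions that avoid presupposing the claim."""
    prompt = (
        "Decompose the claim into simple questions. Each question must be "
        "answerable on its own and must not presuppose that any other part "
        f"of the claim is true.\nClaim: {claim}\nQuestions (one per line):"
    )
    return [q.strip() for q in ask(prompt).splitlines() if q.strip()]

def verify(claim: str, evidence: str) -> str:
    """Verify the claim by reasoning through the decomposed questions."""
    guide = "\n".join(decompose(claim))
    prompt = (
        f"Evidence: {evidence}\nClaim: {claim}\n"
        f"Reason through these questions before deciding:\n{guide}\n"
        "Final verdict (SUPPORTED / REFUTED / NOT ENOUGH INFO):"
    )
    return ask(prompt)
```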
 
Research Results
 
Gains are greatest on Complex Claims
Our method shows the highest improvements on complex claims, with no significant degradation on simple claims.
 
Questions don't need direct answers
We find that questions don't need explicit answers; rather, they can be used to guide the reasoning process.
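To make the contrast concrete, here is a hedged sketch of the alternative this finding rules out, reusing the hypothetical `ask` and `decompose` helpers from the pipeline sketch above: answering every sub-question costs one extra LLM call per question, whereas the guide-only `verify` above folds the questions into a single prompt without answering them.

```python
def verify_answer_each(claim: str, evidence: str) -> str:
    """Baseline variant: answer every sub-question explicitly (N+1 LLM calls)."""
    qa_lines = []
    for q in decompose(claim):
        answer = ask(f"Evidence: {evidence}\nQuestion: {q}\nShort answer:")
        qa_lines.append(f"Q: {q}\nA: {answer}")
    prompt = (
        f"Evidence: {evidence}\nClaim: {claim}\n"
        + "\n".join(qa_lines)
        + "\nFinal verdict (SUPPORTED / REFUTED / NOT ENOUGH INFO):"
    )
    return ask(prompt)
```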
 
Prompt Sensitivity Mitigation
Our method reduces prompt sensitivity variance by 2-5%, providing more stable performance across prompts.
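One way to quantify this kind of sensitivity, sketched under assumptions (the `verify_fn` interface and the labeled `dataset` format are illustrative, not the paper's evaluation code): run the same verifier under several paraphrased prompt templates and report the spread of accuracies.

```python
from statistics import pstdev

def prompt_spread(verify_fn, templates, dataset):
    """Accuracy spread (max - min) and std. dev. across prompt templates."""
    accs = []
    for template in templates:
        correct = sum(
            verify_fn(template, claim, evidence) == label
            for claim, evidence, label in dataset
        )
        accs.append(correct / len(dataset))
    return max(accs) - min(accs), pstdev(accs)
```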
 
Atomic Claim Coverage
Our method covers ~90% of the atomic claims in the dataset, demonstrating its effectiveness in verifying complex claims.
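A hedged sketch of how such coverage might be scored, using the hypothetical `ask` helper from above; treating an atomic claim as covered when at least one decomposed question addresses it is an illustrative proxy, not necessarily the paper's exact metric.

```python
def atomic_coverage(atomic_claims: list[str], questions: list[str]) -> float:
    """Fraction of atomic claims addressed by at least one question."""
    covered = sum(
        any(
            ask(f"Does the question {q!r} ask about the fact {atom!r}? "
                "Answer yes or no:").strip().lower().startswith("yes")
            for q in questions
        )
        for atom in atomic_claims
    )
    return covered / len(atomic_claims)
```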
BibTeX
@misc{dipta2025depresupposerobustlyverifyingclaims,
      title={If We May De-Presuppose: Robustly Verifying Claims through Presupposition-Free Question Decomposition}, 
      author={Shubhashis Roy Dipta and Francis Ferraro},
      year={2025},
      eprint={2508.16838},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.16838}, 
}