Refining attention for explainable and noise-robust fact-checking with transformers

Bussotti, Jean-Flavien; Papotti, Paolo
EMNLP 2025, 30th Conference on Empirical Methods in Natural Language Processing, 4-9 November 2025, Suzhou, China

In tasks like question answering and fact-checking, models must discern relevant information from extensive corpora in an "open-book" setting. Conventional transformer-based models excel at classifying input data, but (i) often falter due to sensitivity to noise and (ii) lack explainability regarding their decision process. To address these challenges, we introduce ATTUN, a novel transformer architecture designed to enhance model transparency and resilience to noise by refining the attention mechanisms. Our approach involves a dedicated module that directly modifies attention weights, allowing the model to both improve predictions and identify the most relevant sections of input data. We validate our methodology using fact-checking datasets and show promising results in question answering. Experiments show up to a 51% improvement in F1 score over state-of-the-art systems for detecting relevant context, and up to an 18% gain in task accuracy when integrating ATTUN into a model.
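The abstract describes a module that directly modifies attention weights so that noisy context is down-weighted while relevant passages stand out. As a rough illustrative sketch of that general idea (this is not the actual ATTUN architecture; the function names and the per-token relevance scores are hypothetical), one can shift attention logits by a relevance score before the softmax:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def refined_attention(q, k, v, relevance):
    """Scaled dot-product attention whose logits are shifted by a
    per-key relevance score before the softmax, so low-relevance
    (noisy) tokens receive little attention mass. In a trained model
    the relevance scores would be produced by a learned module; here
    they are passed in directly for illustration."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)               # (n_queries, n_keys)
    logits = logits + np.log(relevance + 1e-9)  # suppress noisy keys
    weights = softmax(logits, axis=-1)          # rows sum to 1
    return weights @ v, weights

# Three identical keys; the third is marked as noise (relevance ~ 0),
# so it gets almost no attention even though its content matches.
q = np.array([[1.0, 0.0]])
k = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
v = np.array([[1.0], [2.0], [3.0]])
out, w = refined_attention(q, k, v, np.array([1.0, 1.0, 1e-6]))
```

Because the refinement operates on the attention weights themselves, the same weights that drive the prediction can be inspected to see which input sections the model treated as relevant.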


Type:
Poster / Demo
City:
Suzhou
Date:
2025-11-04
Department:
Data Science
Eurecom Ref:
8414
Copyright:
Copyright ACL. Personal use of this material is permitted. The definitive version of this paper was published in EMNLP 2025, 30th Conference on Empirical Methods in Natural Language Processing, 4-9 November 2025, Suzhou, China, and is available at:

PERMALINK: https://www.eurecom.fr/publication/8414