SQUAB: Evaluating LLMs' robustness to ambiguous and unanswerable questions in semantic parsing

Papicchio, Simone; Cagliero, Luca; Papotti, Paolo
EMNLP 2025, 30th Conference on Empirical Methods in Natural Language Processing, 4-9 November 2025, Suzhou, China

Large Language Models (LLMs) have demonstrated robust performance in Semantic Parsing (SP) for well-defined queries with unambiguous intent and answerable responses. However, practical user questions frequently deviate from these ideal conditions, challenging the applicability of existing benchmarks. To address this issue, we introduce SQUAB, an automatic dataset generator of ambiguous and unanswerable questions. SQUAB generates complex, annotated SP tests using a blend of SQL and LLM capabilities. Results show that SQUAB reduces test generation costs by up to 99% compared to human-based solutions while aligning with real-world question patterns. Furthermore, these tests challenge LLM performance while revealing disparities between public and proprietary datasets. This highlights the need for a dynamic, automatic dataset generator such as SQUAB. The code is designed for user extension to accommodate new ambiguous and unanswerable patterns.


Type:
Poster / Demo
City:
Suzhou
Date:
2025-11-04
Department:
Data Science
Eurecom Ref:
8413
Copyright:
Copyright ACL. Personal use of this material is permitted. The definitive version of this paper was published in EMNLP 2025, 30th Conference on Empirical Methods in Natural Language Processing, 4-9 November 2025, Suzhou, China and is available at:

PERMALINK : https://www.eurecom.fr/publication/8413