Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models
CoRR (2023)
Abstract
Generalized quantifiers (e.g., few, most) are used to indicate the
proportion to which a predicate is satisfied (for example, some apples are red). One
way to interpret quantifier semantics is to explicitly bind these satisfactions
with percentage scopes (e.g., 30%-40% of apples are red). This approach can be
helpful for tasks like logic formalization and surface-form quantitative
reasoning (Gordon and Schubert, 2010; Roy et al., 2015). However, it remains
unclear if recent foundation models possess this ability, as they lack direct
training signals. To explore this, we introduce QuRe, a crowd-sourced dataset
of human-annotated generalized quantifiers in Wikipedia sentences featuring
percentage-equipped predicates. We explore quantifier comprehension in language
models using PRESQUE, a framework that combines natural language inference and
the Rational Speech Acts framework. Experimental results on the HVD dataset and
QuRe illustrate that PRESQUE, employing pragmatic reasoning, performs 20%
better than a literal reasoning baseline when predicting quantifier percentage
scopes, with no additional training required.
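The abstract describes PRESQUE as combining NLI scores with the Rational Speech Acts (RSA) framework to map quantifiers to percentage scopes. The following is a minimal, self-contained sketch of RSA-style pragmatic reasoning over percentage bins; the literal scores, bin boundaries, and quantifier set are hypothetical stand-ins (not values from the paper), where in PRESQUE the literal scores would come from an NLI model.

```python
# Hedged sketch of RSA pragmatic reasoning over quantifier percentage scopes.
# The "literal" table below is a hypothetical stand-in for NLI entailment
# probabilities, e.g. scoring "most apples are red" against "p% of apples
# are red" for each percentage bin.

bins = ["0-25%", "25-50%", "50-75%", "75-100%"]   # candidate percentage scopes
quantifiers = ["few", "some", "most", "all"]       # candidate utterances

# Hypothetical literal-listener scores L0(bin | quantifier).
literal = {
    "few":  [0.70, 0.20, 0.07, 0.03],
    "some": [0.30, 0.40, 0.20, 0.10],
    "most": [0.05, 0.15, 0.45, 0.35],
    "all":  [0.01, 0.04, 0.25, 0.70],
}

def normalize(xs):
    total = sum(xs)
    return [x / total for x in xs]

def pragmatic_listener(literal, alpha=1.0):
    """Compute L1(bin | q): the pragmatic speaker S1(q | bin) renormalizes
    literal scores across utterances; the pragmatic listener renormalizes
    the speaker across bins (uniform prior over bins)."""
    speaker = {}
    for i, b in enumerate(bins):
        scores = [literal[q][i] ** alpha for q in quantifiers]
        speaker[b] = dict(zip(quantifiers, normalize(scores)))
    return {q: normalize([speaker[b][q] for b in bins]) for q in quantifiers}

L1 = pragmatic_listener(literal)
# Under these illustrative scores, the pragmatic listener maps "most"
# to the 50-75% bin and "few" to the 0-25% bin.
```

The key pragmatic step is the speaker renormalization: a bin where "most" is the speaker's best choice among competing quantifiers gains probability, even if its raw literal score was not the highest.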
Keywords
semantics, foundation models, reasoning