Exploiting Positional Bias for Query-Agnostic Generative Content in Search
arXiv (2024)
Abstract
In recent years, neural ranking models (NRMs) have been shown to
substantially outperform their lexical counterparts in text retrieval. In
traditional search pipelines, a combination of features leads to well-defined
behaviour. However, as neural approaches become increasingly prevalent as the
final scoring component of engines or as standalone systems, their robustness
to malicious text and, more generally, semantic perturbation needs to be better
understood. We posit that the transformer attention mechanism can induce
exploitable defects through positional bias in search models, leading to an
attack that could generalise beyond a single query or topic. We demonstrate
such defects by showing that non-relevant text, such as promotional
content, can be easily injected into a document without adversely affecting its
position in search results. Unlike previous gradient-based attacks, we
demonstrate these biases in a query-agnostic fashion. In doing so, without
knowledge of topicality, we can still reduce the negative effects of
non-relevant content injection by controlling injection position. Our
experiments are conducted with simulated on-topic promotional text
automatically generated by prompting LLMs with topical context from target
documents. We find that contextualisation of non-relevant text further
reduces negative effects whilst likely circumventing existing content filtering
mechanisms. In contrast, lexical models are found to be more resilient to such
content injection attacks. We then investigate a simple yet effective
compensation for the weaknesses of the NRMs in search, validating our
hypotheses regarding transformer bias.
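The injection setup described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the payload, document, and query are invented, and a toy bag-of-words overlap scorer stands in for both the lexical baselines (e.g. BM25) and the neural ranking models the authors actually evaluate. It only illustrates the experimental control the abstract describes, namely injecting the same payload at different positions and comparing the resulting scores.

```python
def inject(document: str, payload: str, position: int) -> str:
    """Splice `payload` into `document` as a sentence at sentence index
    `position` (clamped to the document's length)."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    position = max(0, min(position, len(sentences)))
    sentences.insert(position, payload.strip().rstrip("."))
    return ". ".join(sentences) + "."

def lexical_score(query: str, document: str) -> float:
    """Toy bag-of-words overlap score; a stand-in for a lexical ranker.
    It ignores token position entirely, unlike a transformer-based NRM."""
    q_terms = set(query.lower().split())
    d_terms = document.lower().split()
    return sum(1 for t in d_terms if t in q_terms) / max(len(d_terms), 1)

doc = "Neural rankers score query document pairs. They rely on attention."
ad = "Visit our store for great deals"  # query-agnostic promotional payload
query = "neural rankers attention"

at_start = inject(doc, ad, 0)    # payload at the beginning of the document
at_end = inject(doc, ad, 99)     # payload appended at the end

# A purely lexical scorer is position-invariant, so injection position
# cannot change the score; the paper's finding is that positional bias in
# transformer attention breaks this invariance for neural models.
assert lexical_score(query, at_start) == lexical_score(query, at_end)
```

Replacing `lexical_score` with a neural cross-encoder and sweeping `position` over the document would reproduce the kind of position-sensitivity measurement the abstract describes.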