
HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering

T. Luo, Fangyu Lei, Johan van der Lei, Weihao Liu, Shizhu He, Jing Zhao, Kang Liu

arXiv (Cornell University), 2023

Abstract
Answering numerical questions over hybrid content from given tables and text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs) have gained significant attention in the NLP community, and In-Context Learning and Chain-of-Thought prompting have become particularly popular research topics in this field. In this paper, we introduce a new prompting strategy, Hybrid prompt strategy and Retrieval of Thought (HRoT), for TextTableQA. Through In-Context Learning, we prompt the model to develop retrieval-style reasoning when dealing with hybrid data. Our method achieves superior performance compared to the fully-supervised SOTA on the MultiHiertt dataset in the few-shot setting.
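The abstract only outlines the approach, so the following Python sketch is offered as a rough illustration of the general pattern it describes: retrieve the most relevant worked reasoning exemplars for a new question, then pack them into one few-shot prompt together with a linearized table and the accompanying text. This is not the authors' implementation; the exemplar pool, the bag-of-words retriever, and all function and variable names are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the HRoT code): retrieve exemplar "thoughts" by
# simple token overlap and assemble a hybrid table-text few-shot prompt.
from collections import Counter

def tokenize(text):
    return Counter(text.lower().split())

def overlap_score(q, d):
    # Bag-of-words overlap; the paper's retrieval of thought is more involved.
    return sum((q & d).values()) / (sum(q.values()) or 1)

def linearize_table(rows):
    # Flatten a table into "header: cell" lines so it can sit next to the passage text.
    header, *body = rows
    return "\n".join(" | ".join(f"{h}: {c}" for h, c in zip(header, r)) for r in body)

def build_hybrid_prompt(question, table_rows, passage, exemplars, k=2):
    # Rank exemplar (question, reasoning chain, answer) triples by similarity to the query.
    q_vec = tokenize(question)
    ranked = sorted(exemplars,
                    key=lambda e: overlap_score(q_vec, tokenize(e["question"])),
                    reverse=True)
    shots = "\n\n".join(
        f"Question: {e['question']}\nThought: {e['thought']}\nAnswer: {e['answer']}"
        for e in ranked[:k]
    )
    # Hybrid prompt: retrieved thoughts, then the table, the text, and the new question.
    return (f"{shots}\n\nTable:\n{linearize_table(table_rows)}\n\n"
            f"Text: {passage}\n\nQuestion: {question}\nThought:")

if __name__ == "__main__":
    exemplars = [
        {"question": "What is the change in revenue from 2019 to 2020?",
         "thought": "Revenue in 2020 is 120 and in 2019 is 100, so 120 - 100 = 20.",
         "answer": "20"},
        {"question": "Which segment had the highest cost?",
         "thought": "Compare the cost cells row by row and pick the maximum.",
         "answer": "Hardware"},
    ]
    table = [["Year", "Revenue"], ["2021", "150"], ["2022", "180"]]
    print(build_hybrid_prompt("What is the change in revenue from 2021 to 2022?",
                              table, "Revenue figures are in millions.", exemplars))
```

In the paper's setting, the retrieval step would select chains of thought suited to hybrid table-text reasoning rather than relying on token overlap; the sketch only shows where such a retriever plugs into prompt construction.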
Keywords
hybrid question answering, hybrid prompt strategy, retrieval, table-text