Hallucination Detection and Hallucination Mitigation: An Investigation
CoRR (2024)
Abstract
Large language models (LLMs), including ChatGPT, Bard, and Llama, have
achieved remarkable successes over the last two years in a range of different
applications. Despite these successes, concerns remain that limit the wide
application of LLMs. A key concern is hallucination: in addition to correct
responses, LLMs can also generate seemingly correct but factually incorrect
responses. This report presents a comprehensive review of the current
literature on both hallucination detection and hallucination mitigation. We
hope that this report can serve as a good reference for both engineers and
researchers who are interested in LLMs and in applying them to real-world
tasks.
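
To make the detection side of the abstract concrete, one family of detection methods surveyed in this literature checks whether several independently sampled responses to the same prompt agree with one another, on the premise that hallucinated answers tend to be inconsistent across samples (in the spirit of sampling-based consistency checks such as SelfCheckGPT). The sketch below is a minimal, hypothetical illustration of that idea, not a method taken from this paper: the `consistency_score` helper and the sample answers are our own illustrative assumptions.

```python
from collections import Counter

def consistency_score(answers: list[str]) -> float:
    """Fraction of sampled answers that agree with the majority answer.

    A low score means the model is not self-consistent across samples,
    which is one heuristic signal that the response may be hallucinated.
    (Illustrative helper, not an API from the surveyed paper.)
    """
    if not answers:
        return 0.0
    normalized = [a.strip().lower() for a in answers]
    _, majority_count = Counter(normalized).most_common(1)[0]
    return majority_count / len(normalized)

# Hypothetical example: five answers sampled for the same factual question.
samples = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
score = consistency_score(samples)
print(f"consistency = {score:.2f}")  # 0.80 here
if score < 0.5:
    print("Low self-consistency: flag response for verification.")
```

In practice the sampled answers would come from repeated LLM calls at nonzero temperature, and agreement would be judged semantically (e.g., with an entailment model) rather than by exact string match; the exact-match comparison here is a deliberate simplification.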