TruthEval: A Dataset to Evaluate LLM Truthfulness and Reliability
arXiv (2024)
Abstract
Large Language Model (LLM) evaluation is currently one of the most important
areas of research, with existing benchmarks proving insufficient and not
fully representative of LLMs' various capabilities. We present TruthEval, a
curated collection of challenging statements on sensitive topics for LLM
benchmarking. These statements were curated by hand and have known truth
values. The categories were chosen to distinguish LLMs' abilities from their
stochastic nature. We perform initial analyses using this dataset and find
several instances of LLMs failing at simple tasks, showing their inability to
understand simple questions.
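
To make the evaluation setup concrete, below is a minimal sketch of how a benchmark of truth-labeled statements might be scored. The record fields, the example statements, and the `query_model` stand-in are illustrative assumptions, not the paper's actual schema, prompts, or data.

```python
# Minimal sketch of scoring an LLM against truth-labeled statements.
# The dataset schema, example records, and query_model stand-in are
# hypothetical; TruthEval's real format and prompts may differ.
from dataclasses import dataclass

@dataclass
class Statement:
    text: str       # the claim shown to the model
    label: bool     # known ground-truth value
    category: str   # topic category the claim belongs to

# Illustrative records; not drawn from the actual dataset.
DATASET = [
    Statement("Water boils at 100 degrees Celsius at sea level.", True, "science"),
    Statement("The Earth is flat.", False, "science"),
]

def query_model(statement: str) -> bool:
    # Stand-in for a real LLM call that asks "Is this statement true?"
    # and parses the reply into a boolean; replace with your client.
    return True  # trivially answers "true" for every claim

def accuracy(dataset: list[Statement]) -> float:
    """Fraction of statements whose predicted truth value matches the label."""
    correct = sum(query_model(s.text) == s.label for s in dataset)
    return correct / len(dataset)

if __name__ == "__main__":
    print(f"accuracy: {accuracy(DATASET):.2f}")
```

Because each statement carries a known label, per-category accuracy can be computed the same way, which is what lets a benchmark like this separate genuine capability from chance agreement.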