OpenFactCheck: A Unified Framework for Factuality Evaluation of LLMs
CoRR (2024)
Abstract
The increased use of large language models (LLMs) across a variety of
real-world applications calls for mechanisms to verify the factual accuracy of
their outputs. Assessing the factuality of free-form responses in open
domains is particularly difficult. Moreover, different papers use disparate
evaluation benchmarks and measures, which makes their results hard to compare
and hampers future progress. To mitigate these issues, we propose
OpenFactCheck, a unified factuality evaluation framework for LLMs.
OpenFactCheck consists of three modules: (i) CUSTCHECKER, which allows users
to easily customize an automatic fact-checker and verify the factual
correctness of documents and claims; (ii) LLMEVAL, a unified evaluation
framework that fairly assesses an LLM's factuality from multiple
perspectives; and (iii) CHECKEREVAL, an extensible solution for gauging the
reliability of automatic fact-checkers' verification results using
human-annotated datasets. OpenFactCheck is publicly released at
https://github.com/yuxiaw/OpenFactCheck.
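
Conceptually, the three modules compose into a single pipeline: a customizable checker verifies claims, the LLM evaluator scores a model by checking its responses, and the checker evaluator benchmarks the checker itself against human labels. The Python sketch below illustrates one way such a pipeline could be wired together under those assumptions; all class and method names (CustChecker, LLMEval, CheckerEval, verify, evaluate, accuracy) are hypothetical illustrations drawn from the module descriptions above, not the actual API of the released repository.

```python
# Hypothetical sketch of how OpenFactCheck's three modules might compose.
# Every name below is an illustrative assumption, NOT the released API.

from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    label: bool | None = None  # True = factual, False = non-factual


class CustChecker:
    """Module (i): a customizable fact-checker that decomposes a
    document into claims and verifies each one."""

    def verify(self, document: str) -> list[Claim]:
        # A real checker would extract claims, retrieve evidence, and
        # classify each claim; here the pipeline is stubbed out.
        claims = [Claim(s.strip()) for s in document.split(".") if s.strip()]
        for c in claims:
            c.label = True  # placeholder verdict
        return claims


class LLMEval:
    """Module (ii): scores an LLM's factuality by fact-checking its
    responses to a fixed set of prompts."""

    def __init__(self, checker: CustChecker):
        self.checker = checker

    def evaluate(self, respond, prompts: list[str]) -> float:
        labels = []
        for p in prompts:
            labels += [c.label for c in self.checker.verify(respond(p))]
        # Fraction of generated claims judged factual.
        return sum(labels) / max(len(labels), 1)


class CheckerEval:
    """Module (iii): meta-evaluates a fact-checker against
    human-annotated gold labels."""

    def accuracy(self, checker: CustChecker,
                 annotated: list[tuple[str, bool]]) -> float:
        correct = 0
        for claim_text, gold in annotated:
            pred = checker.verify(claim_text)[0].label
            correct += int(pred == gold)
        return correct / max(len(annotated), 1)


if __name__ == "__main__":
    checker = CustChecker()
    # (i) Verify a document's claims directly.
    print(checker.verify("Paris is the capital of France."))
    # (ii) Score an LLM; a trivial echo function stands in for a model.
    print(LLMEval(checker).evaluate(lambda p: p, ["Name a fact about Paris."]))
    # (iii) Benchmark the checker on human-annotated claims.
    print(CheckerEval().accuracy(checker, [("Paris is in France", True)]))
```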