PiCO: Peer Review in LLMs based on the Consistency Optimization
arXiv (2024)
Abstract
Existing large language model (LLM) evaluation methods typically focus on
testing performance on closed-environment, domain-specific
benchmarks with human annotations. In this paper, we explore a novel
unsupervised evaluation direction, utilizing peer-review mechanisms to measure
LLMs automatically. In this setting, both open-source and closed-source LLMs
lie in the same environment, capable of answering unlabeled questions and
evaluating each other, where each LLM's response score is jointly determined by
other anonymous ones. To obtain the ability hierarchy among these models, we
assign each LLM a learnable capability parameter to adjust the final ranking.
We formalize it as a constrained optimization problem, intending to maximize
the consistency of each LLM's capabilities and scores. The key assumption
behind is that high-level LLM can evaluate others' answers more accurately than
low-level ones, while higher-level LLM can also achieve higher response scores.
Moreover, we propose three metrics called PEN, CIN, and LIS to evaluate the gap
in aligning human rankings. We perform experiments on multiple datasets with
these metrics, validating the effectiveness of the proposed approach.
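To make the consistency-optimization idea concrete, below is a minimal, hypothetical sketch (not the paper's exact formulation): given a review-score matrix S, where S[j, i] is the average score reviewer model j assigns to model i's answers, each model gets a learnable capability weight, and a simple fixed-point iteration is used as a stand-in for the constrained optimization, so that capabilities and capability-weighted response scores rank the models consistently. The matrix values and the update rule here are illustrative assumptions.

```python
import numpy as np

# Hypothetical review-score matrix: S[j, i] is the average score that
# reviewer model j assigns to the answers of model i (diagonal unused).
S = np.array([
    [0.0, 0.6, 0.4, 0.3],
    [0.8, 0.0, 0.5, 0.4],
    [0.7, 0.6, 0.0, 0.3],
    [0.9, 0.7, 0.6, 0.0],
])

# Learnable capability weights, one per model, initialized uniformly.
w = np.ones(S.shape[0]) / S.shape[0]

# Fixed-point iteration: each model's response score is the
# capability-weighted sum of the reviews it receives; the weights are
# then renormalized to those scores. At convergence, capabilities and
# response scores induce the same ranking, i.e. they are consistent.
for _ in range(100):
    scores = S.T @ w                # response score of each model
    w_new = scores / scores.sum()   # renormalize as new capabilities
    if np.allclose(w_new, w, atol=1e-9):
        break
    w = w_new

ranking = np.argsort(-w)
print("capability weights:", np.round(w, 3))
print("induced ranking (best first):", ranking)
```

The resulting ranking can then be compared against a human ranking with metrics such as PEN, CIN, and LIS described in the abstract.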