LongWanjuan: Towards Systematic Measurement for Long Text Quality
CoRR (2024)
Abstract
The quality of training data is crucial for enhancing the long-text
capabilities of foundation models. Despite existing efforts to refine data
quality through heuristic rules and evaluations based on data diversity and
difficulty, there is a lack of systematic approaches specifically tailored to
assessing long texts. Addressing this gap, our work systematically measures the
quality of long texts by evaluating three fundamental linguistic dimensions:
coherence, cohesion, and complexity. Drawing on these three dimensions, we
introduce a suite of metrics designed to evaluate the quality of long texts,
encompassing both statistical and pre-trained language model-based ones.
Leveraging these metrics, we present
LongWanjuan, a bilingual dataset specifically tailored to enhance the training
of language models for long-text tasks with over 160B tokens. In LongWanjuan,
we categorize long texts into holistic, aggregated, and chaotic types, enabling
a detailed analysis of long-text quality. Furthermore, we devise a data mixture
recipe that strategically balances different types of long texts within
LongWanjuan, leading to significant improvements in model performance on
long-text tasks. The code and dataset are available at
https://github.com/OpenLMLab/LongWanjuan.
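To make the idea of statistical quality metrics concrete, the sketch below implements two simple proxies for the dimensions the abstract names: a type-token ratio as a rough lexical-complexity signal and mean word overlap between consecutive sentences as a rough cohesion signal. These are illustrative stand-ins, not the paper's actual metrics; the function names and the sentence-splitting heuristic are assumptions for the example.

```python
# Hypothetical statistical proxies for long-text quality; these are NOT
# the metrics defined in the LongWanjuan paper, only a minimal sketch.

def type_token_ratio(text: str) -> float:
    """Lexical-complexity proxy: fraction of distinct tokens in the text."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def adjacent_overlap(text: str) -> float:
    """Cohesion proxy: mean Jaccard word overlap of consecutive sentences.

    Uses a naive period-based sentence splitter, which is an assumption
    made for this example.
    """
    sents = [s.split() for s in text.split(".") if s.strip()]
    if len(sents) < 2:
        return 0.0
    overlaps = []
    for a, b in zip(sents, sents[1:]):
        sa = {w.lower() for w in a}
        sb = {w.lower() for w in b}
        overlaps.append(len(sa & sb) / len(sa | sb))
    return sum(overlaps) / len(overlaps)

sample = "The cat sat. The cat ran. A dog barked."
print(type_token_ratio(sample))
print(adjacent_overlap(sample))
```

In a real pipeline, such per-document scores could be thresholded to sort texts into buckets analogous to the paper's holistic, aggregated, and chaotic categories, though the paper's own categorization also relies on pre-trained language model-based scores.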