Length-Aware Multi-Kernel Transformer for Long Document Classification
CoRR (2024)
Abstract
Lengthy documents pose a unique challenge to neural language models due to
substantial memory consumption. While existing state-of-the-art (SOTA) models
segment long texts into equal-length snippets (e.g., 128 tokens per snippet) or
deploy sparse attention networks, these methods face new challenges of context
fragmentation, since fixed-length snippets cut across sentence boundaries, and
of generalizability across varying text lengths. For example, our empirical
analysis shows that SOTA models consistently overfit to one range of document
lengths (e.g., 2,000 tokens) while performing worse on texts of other lengths
(e.g., 1,000 or 4,000 tokens). In this study, we propose a Length-Aware
Multi-Kernel Transformer (LAMKIT) to address these challenges in long document
classification. LAMKIT encodes lengthy documents with diverse transformer-based
kernels to bridge context boundaries and vectorizes the text length with those
kernels to promote robustness across varying document lengths. Experiments on
five standard benchmarks from the health and law domains show that LAMKIT
outperforms SOTA models by up to 10.9 absolute points. We conduct extensive
ablation analyses to examine model robustness and effectiveness across varying
document lengths.
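The page does not include an implementation, so the sketch below is only an illustrative reading of the two ideas the abstract names: encoding a document with several kernel widths so that no single segment boundary fragments the context for every kernel, and vectorizing the document length for the classifier. The abstract describes transformer-based kernels; overlapping 1-D convolutions stand in for them here to keep the example short, and every name and hyperparameter (`MultiKernelLengthAwareEncoder`, `kernel_sizes`, the log-spaced length buckets) is an assumption, not the authors' code.

```python
import torch
import torch.nn as nn

class MultiKernelLengthAwareEncoder(nn.Module):
    """Illustrative sketch only, not the authors' code. Overlapping 1-D
    convolutions stand in for LAMKIT's transformer-based kernels; a
    bucketized length embedding stands in for its length vectorization."""

    def __init__(self, dim=256, kernel_sizes=(64, 128, 256), num_classes=2,
                 num_len_buckets=32):
        super().__init__()
        # One "kernel" per segment width; stride k // 2 makes segments
        # overlap, so each kernel places its boundaries differently.
        self.kernels = nn.ModuleList([
            nn.Conv1d(dim, dim, kernel_size=k, stride=k // 2, padding=k // 2)
            for k in kernel_sizes
        ])
        # Embedding table for the (log-bucketized) document length.
        self.len_embed = nn.Embedding(num_len_buckets, dim)
        self.classifier = nn.Linear(dim * (len(kernel_sizes) + 1), num_classes)

    def forward(self, x):                       # x: (batch, seq_len, dim)
        batch, seq_len, _ = x.shape
        h = x.transpose(1, 2)                   # (batch, dim, seq_len)
        # Mean-pool each kernel's segment features into one vector per kernel.
        pooled = [conv(h).mean(dim=-1) for conv in self.kernels]
        # Vectorize length: log2-spaced buckets, clamped to the table size.
        bucket = min(int(torch.log2(torch.tensor(float(seq_len))).item()),
                     self.len_embed.num_embeddings - 1)
        len_vec = self.len_embed(torch.tensor(bucket)).expand(batch, -1)
        return self.classifier(torch.cat(pooled + [len_vec], dim=-1))

# Example: four documents of 1,000 tokens, already embedded to 256 dims.
logits = MultiKernelLengthAwareEncoder()(torch.randn(4, 1000, 256))
print(logits.shape)                             # torch.Size([4, 2])
```

Concatenating one pooled vector per kernel alongside the length embedding is one simple way to let the classifier condition jointly on multi-scale context and document length, which is the robustness mechanism the abstract claims.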