Assessing the Impact of Sequence Length Learning on Classification Tasks for Transformer Encoder Models
arXiv (2022)
Abstract
Classification algorithms using Transformer architectures can be affected by
the sequence length learning problem whenever observations from different
classes have a different length distribution. This problem causes models to use
sequence length as a predictive feature instead of relying on important textual
information. Although most public datasets are not affected by this problem,
privately owned corpora for fields such as medicine and insurance may carry
this data bias. The exploitation of this sequence length feature poses
challenges throughout the value chain as these machine learning models can be
used in critical applications. In this paper, we empirically expose this
problem and present approaches to minimize its impacts.
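The bias described above can be checked for in practice by comparing the length distributions of observations across classes before training. The following sketch (a hypothetical toy corpus, not the paper's data or method) computes per-class mean token counts; a large gap between classes is a warning sign that a model could exploit length as a shortcut feature.

```python
from statistics import mean

# Hypothetical toy corpus: label 0 texts are short, label 1 texts are long,
# mimicking the class-dependent length distributions the paper describes.
corpus = [
    ("short text one", 0),
    ("short note", 0),
    ("tiny", 0),
    ("this is a much longer observation with many more tokens than the short class", 1),
    ("another lengthy example whose token count far exceeds the negative class", 1),
]

def length_stats(examples):
    """Group whitespace-token counts by label and return per-class mean lengths."""
    by_label = {}
    for text, label in examples:
        by_label.setdefault(label, []).append(len(text.split()))
    return {label: mean(lengths) for label, lengths in by_label.items()}

stats = length_stats(corpus)
print(stats)  # per-class mean token counts; a large gap signals length bias
```

If such a gap appears in a real corpus, mitigations like truncating or padding inputs to a common length, or resampling so length distributions match across classes, are natural first steps.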