Using Large Language Model for End-to-End Chinese ASR and NER
CoRR (2024)
Abstract
Mapping speech tokens into the same feature space as text tokens has become the
standard paradigm for integrating the speech modality into decoder-only large
language models (LLMs). An alternative is an encoder-decoder architecture that
incorporates speech features through cross-attention; this approach, however,
has received less attention in the literature. In this work, we connect the
Whisper encoder with ChatGLM3 and provide in-depth comparisons of the two
approaches on Chinese automatic speech recognition (ASR) and named entity
recognition (NER) tasks. We evaluate them not only with conventional metrics
such as the F1 score but also with a novel fine-grained taxonomy of ASR-NER
errors. Our experiments reveal that the encoder-decoder architecture
outperforms the decoder-only architecture with a short context, while the
decoder-only architecture benefits from a long context because it fully
exploits all layers of the LLM. By using the LLM, we significantly reduced
entity omission errors and improved entity ASR accuracy compared to the
Conformer baseline. Additionally, we obtained a state-of-the-art (SOTA) F1
score of 0.805 on the AISHELL-NER test set by using chain-of-thought (CoT)
NER, which first infers long-form ASR transcriptions and then predicts NER
labels.
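
To make the two integration strategies concrete, below is a minimal PyTorch sketch, not the paper's code. The dimensions (Whisper-large feature width 1280, a ChatGLM3-like hidden size of 4096) and the module names `SpeechProjector` and `CrossAttentionFusion` are assumptions for illustration only.

```python
# Hypothetical sketch of the two speech-LLM integration routes described in
# the abstract. All sizes and class names are assumed, not from the paper.
import torch
import torch.nn as nn

SPEECH_DIM, LLM_DIM = 1280, 4096  # assumed Whisper / ChatGLM3-like widths


class SpeechProjector(nn.Module):
    """Decoder-only route: project speech encoder outputs into the LLM's
    text-token embedding space, then prepend them to the text embeddings."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(SPEECH_DIM, LLM_DIM)

    def forward(self, speech_feats, text_embeds):
        # speech_feats: (B, T_s, SPEECH_DIM); text_embeds: (B, T_t, LLM_DIM)
        speech_tokens = self.proj(speech_feats)           # (B, T_s, LLM_DIM)
        return torch.cat([speech_tokens, text_embeds], dim=1)


class CrossAttentionFusion(nn.Module):
    """Encoder-decoder route: decoder hidden states attend to the speech
    encoder outputs via cross-attention, with a residual connection."""

    def __init__(self, n_heads=32):
        super().__init__()
        self.kv_proj = nn.Linear(SPEECH_DIM, LLM_DIM)
        self.attn = nn.MultiheadAttention(LLM_DIM, n_heads, batch_first=True)

    def forward(self, decoder_hidden, speech_feats):
        kv = self.kv_proj(speech_feats)                   # (B, T_s, LLM_DIM)
        fused, _ = self.attn(decoder_hidden, kv, kv)      # queries from decoder
        return decoder_hidden + fused


if __name__ == "__main__":
    B, T_s, T_t = 2, 100, 16
    speech = torch.randn(B, T_s, SPEECH_DIM)
    text = torch.randn(B, T_t, LLM_DIM)
    hidden = torch.randn(B, T_t, LLM_DIM)
    print(SpeechProjector()(speech, text).shape)          # (2, 116, 4096)
    print(CrossAttentionFusion()(hidden, speech).shape)   # (2, 16, 4096)
```

In the decoder-only route the projected speech tokens pass through every LLM layer alongside the text, which is consistent with the abstract's observation that this architecture benefits from a long context.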
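
The abstract does not give the exact CoT prompt template, but the two-stage output it describes (transcription first, then entity labels) can be sketched as a target-string format. The function name, field labels, and entity tuple notation below are assumptions for illustration.

```python
# Hypothetical two-stage target format for chain-of-thought NER: the model is
# trained to emit the long-form ASR transcription before the entity labels.
def cot_ner_target(transcription: str, entities: list[tuple[str, str]]) -> str:
    """Compose the CoT target: transcription, then (text, label) entity pairs."""
    tagged = "; ".join(f"({text}, {label})" for text, label in entities)
    return f"转写: {transcription}\n实体: {tagged}"


print(cot_ner_target("我在北京见到了李明", [("北京", "LOC"), ("李明", "PER")]))
# 转写: 我在北京见到了李明
# 实体: (北京, LOC); (李明, PER)
```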