Cross-Lingual Transfer Learning for Low-Resource Speech Translation
CoRR (2023)
Abstract
The paper presents a novel three-step transfer learning framework for
enhancing cross-lingual transfer from high- to low-resource languages in the
downstream application of Automatic Speech Translation. The approach integrates
a semantic knowledge-distillation step into the existing two-step cross-lingual
transfer learning framework XLS-R. This added step injects semantic knowledge
into the multilingual speech encoder, which is pre-trained via self-supervised
learning on unlabeled speech. Our proposed three-step cross-lingual transfer
learning framework addresses the large cross-lingual transfer gap (TRFGap)
observed in the XLS-R framework between high-resource and low-resource
languages. We validate our proposal through extensive experiments and
comparisons on the CoVoST-2 benchmark, showing significant improvements in
translation performance, especially for low-resource languages, and a notable
reduction in the TRFGap.
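The semantic knowledge-distillation step described above can be sketched as aligning the speech encoder's utterance embedding with a frozen semantic teacher embedding. The following is a minimal illustrative sketch using a cosine-distance objective; the function name, toy data, and the choice of loss are assumptions for illustration, and the paper's exact objective may differ.

```python
import numpy as np

def semantic_distillation_loss(student_emb: np.ndarray,
                               teacher_emb: np.ndarray) -> float:
    """Mean cosine distance between student (speech encoder) and frozen
    teacher (semantic) embeddings for a batch of utterances.
    Illustrative sketch only; not the paper's exact objective."""
    # L2-normalize each embedding, then compute 1 - cosine similarity.
    s = student_emb / np.linalg.norm(student_emb, axis=-1, keepdims=True)
    t = teacher_emb / np.linalg.norm(teacher_emb, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s * t, axis=-1)))

# Hypothetical toy batch: 2 utterances with 4-dimensional embeddings.
student = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0]])
teacher = np.array([[1.0, 0.0, 0.0, 0.0],   # aligned with student[0]
                    [0.0, 0.0, 1.0, 0.0]])  # orthogonal to student[1]
loss = semantic_distillation_loss(student, teacher)  # → 0.5
```

Minimizing such a loss during an intermediate training stage would pull the speech representations toward a shared semantic space before the final translation fine-tuning.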
Key words
transfer learning, automatic speech translation