ANGOFA: Leveraging OFA Embedding Initialization and Synthetic Data for Angolan Language Model
CoRR (2024)
Abstract
In recent years, the development of pre-trained language models (PLMs) has
gained momentum, showcasing their capacity to transcend linguistic barriers and
facilitate knowledge transfer across diverse languages. However, this progress
has largely bypassed very low-resource languages,
creating a notable void in the multilingual landscape. This paper addresses
this gap by introducing four tailored PLMs specifically finetuned for Angolan
languages, employing a Multilingual Adaptive Fine-tuning (MAFT) approach. We
investigate the role of informed embedding initialization and synthetic data in
enhancing the performance of MAFT models on downstream tasks. Our models
improve over the SOTA AfroXLMR-base baseline (developed through MAFT) and OFA
(an effective embedding-initialization method) by 12.3 and 3.8 points,
respectively.
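The abstract credits informed embedding initialization for part of the gain. As a rough illustration of the general idea (not OFA's actual algorithm, which builds on external multilingual embeddings and factorized projections), the sketch below initializes an extended vocabulary's embedding matrix by copying vectors for tokens shared with the source tokenizer and sampling the rest from the source embedding distribution; all names and shapes here are hypothetical.

```python
import numpy as np

def init_target_embeddings(src_vocab, src_emb, tgt_vocab, seed=0):
    """Simplified informed initialization: copy embeddings for tokens that
    overlap with the source vocabulary; draw the remaining rows from a
    normal distribution fitted to the source embeddings."""
    rng = np.random.default_rng(seed)
    dim = src_emb.shape[1]
    # Per-dimension statistics of the source embedding matrix.
    mean, std = src_emb.mean(axis=0), src_emb.std(axis=0)
    src_index = {tok: i for i, tok in enumerate(src_vocab)}
    tgt_emb = np.empty((len(tgt_vocab), dim))
    for j, tok in enumerate(tgt_vocab):
        if tok in src_index:
            tgt_emb[j] = src_emb[src_index[tok]]   # reuse known token vector
        else:
            tgt_emb[j] = rng.normal(mean, std)     # sample a new token vector
    return tgt_emb

# Tiny usage example with a toy 3-token source vocabulary.
src_vocab = ["the", "ngola", "##a"]
src_emb = np.arange(12, dtype=float).reshape(3, 4)
tgt_emb = init_target_embeddings(src_vocab, src_emb, ["ngola", "kimbundu"])
```

Compared with random initialization of the whole matrix, this keeps the shared subword space aligned with the pretrained model, which is the intuition behind methods like OFA.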