MAFIA: Multi-Adapter Fused Inclusive LanguAge Models
CoRR (2024)
Abstract
Pretrained Language Models (PLMs) are widely used in NLP for various tasks.
Recent studies have identified various biases that such models exhibit and have
proposed methods to correct these biases. However, most of these works address a
limited set of bias dimensions independently, such as gender, race, or religion.
Moreover, the methods typically involve finetuning the full model to maintain
performance on the downstream task. In this work, we aim to modularly
debias a pretrained language model across multiple dimensions. Previous works
have extensively explored debiasing PLMs using limited, US-centric counterfactual
data augmentation (CDA). We use structured knowledge and a large generative
model to build a diverse CDA across multiple bias dimensions in a
semi-automated way. We highlight that existing debiasing methods do not consider
interactions between multiple societal biases, and we propose a debiasing model
that exploits the synergy amongst various societal biases, enabling
multi-bias debiasing simultaneously. An extensive evaluation on multiple tasks
and languages demonstrates the efficacy of our approach.
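The abstract describes the CDA pipeline only at a high level. Below is a minimal illustrative sketch of counterfactual data augmentation: identity terms in a sentence are swapped with counterparts along a given bias dimension. The term pairs and the `augment` helper are hypothetical examples for illustration; the paper itself builds a far richer, multi-dimensional term set from structured knowledge and a large generative model.

```python
# Minimal sketch of counterfactual data augmentation (CDA).
# The term pairs below are hypothetical; the paper derives its pairs
# semi-automatically from structured knowledge and a generative model.
import re

COUNTERFACTUAL_PAIRS = {
    "gender": [("he", "she"), ("his", "her"), ("father", "mother")],
    "religion": [("church", "mosque"), ("bible", "quran")],
}

def augment(sentence: str, dimension: str) -> list[str]:
    """Generate counterfactual variants of `sentence` for one bias dimension."""
    variants = []
    for a, b in COUNTERFACTUAL_PAIRS[dimension]:
        for src, dst in ((a, b), (b, a)):
            pattern = rf"\b{re.escape(src)}\b"
            if re.search(pattern, sentence):
                variants.append(re.sub(pattern, dst, sentence))
    return variants

print(augment("he spoke to his father at the church.", "gender"))
# ['she spoke to his father at the church.',
#  'he spoke to her father at the church.',
#  'he spoke to his mother at the church.']
```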
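The "multi-adapter fused" name suggests one lightweight adapter per bias dimension whose outputs are combined by a learned fusion layer over a frozen PLM. The following PyTorch sketch shows that general idea in the spirit of AdapterFusion; all module names, sizes, and the fusion rule are assumptions for illustration, not the paper's exact design.

```python
# Illustrative sketch: fusing several per-dimension debiasing adapters
# via attention over their outputs. Sizes and names are assumptions.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project / nonlinearity / up-project adapter with a residual connection."""
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))

class AdapterFusion(nn.Module):
    """Attention over the outputs of several adapters, queried by the layer input."""
    def __init__(self, hidden: int):
        super().__init__()
        self.query = nn.Linear(hidden, hidden)
        self.key = nn.Linear(hidden, hidden)

    def forward(self, h: torch.Tensor, adapter_outs: torch.Tensor) -> torch.Tensor:
        # adapter_outs: (num_adapters, batch, seq, hidden)
        q = self.query(h)                             # (batch, seq, hidden)
        k = self.key(adapter_outs)                    # (A, batch, seq, hidden)
        scores = torch.einsum("bsh,absh->abs", q, k)  # one score per adapter
        weights = torch.softmax(scores, dim=0)        # normalize across adapters
        return torch.einsum("abs,absh->bsh", weights, adapter_outs)

hidden = 768
adapters = nn.ModuleList(BottleneckAdapter(hidden) for _ in range(3))  # e.g. gender, race, religion
fusion = AdapterFusion(hidden)

h = torch.randn(2, 16, hidden)                # (batch, seq, hidden) from a frozen PLM layer
outs = torch.stack([a(h) for a in adapters])  # (3, batch, seq, hidden)
fused = fusion(h, outs)                       # (batch, seq, hidden)
print(fused.shape)  # torch.Size([2, 16, 768])
```

In such a setup only the adapters and the fusion layer are trained while the PLM stays frozen, which is what makes the debiasing modular and lets multiple bias dimensions interact.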