A Systematic Evaluation of Adversarial Attacks against Speech Emotion Recognition Models
arXiv (2024)

Abstract
Speech emotion recognition (SER) has been gaining increasing attention in recent years due to its potential applications in diverse fields and the possibilities offered by deep learning technologies. However, recent studies have shown that deep learning models can be vulnerable to adversarial attacks. In this paper, we systematically assess this problem by examining the impact of various white-box and black-box adversarial attacks on different languages and genders within the context of SER. We first propose a suitable methodology for audio data processing, feature extraction, and a CNN-LSTM architecture. The observed outcomes highlight the significant vulnerability of CNN-LSTM models to adversarial examples (AEs): all of the considered adversarial attacks significantly reduce the performance of the constructed models. Furthermore, when assessing the efficacy of the attacks, only minor differences were noted between the languages analyzed as well as between male and female speech. In summary, this work contributes to the understanding of the robustness of CNN-LSTM models, particularly in SER scenarios, and of the impact of AEs. Our findings serve as a baseline for a) developing more robust algorithms for SER, b) designing more effective attacks, c) investigating possible defenses, d) improving the understanding of vocal differences between languages and genders, and e) overall, enhancing our comprehension of the SER task.
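To make the setup concrete, below is a minimal sketch of a CNN-LSTM emotion classifier operating on log-mel-spectrogram inputs, together with a one-step FGSM white-box attack of the kind evaluated in such studies. The abstract does not specify the exact architecture, feature front end, attack set, or hyperparameters, so all names, shapes, and values here (e.g., `n_mels=64`, `eps=0.01`, PyTorch as the framework) are illustrative assumptions rather than the paper's configuration.

```python
# Hypothetical sketch: CNN-LSTM SER classifier plus an FGSM-style white-box attack
# on log-mel-spectrogram inputs. Architecture, shapes, and hyperparameters are
# illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn


class CNNLSTM(nn.Module):
    def __init__(self, n_mels=64, n_classes=7):
        super().__init__()
        # 2D CNN front end over (batch, 1, n_mels, time) spectrograms
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        # LSTM over the (downsampled) time axis of the CNN feature maps
        self.lstm = nn.LSTM(input_size=64 * (n_mels // 4), hidden_size=128,
                            batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                              # x: (B, 1, n_mels, T)
        h = self.cnn(x)                                # (B, C, n_mels/4, T/4)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f) # (B, T/4, C*F)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])                     # emotion logits


def fgsm_attack(model, x, y, eps=0.01):
    """One-step FGSM: perturb the input in the direction of the loss gradient sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()


# Usage with random stand-in data (real experiments would use extracted features)
model = CNNLSTM()
x = torch.randn(8, 1, 64, 128)           # batch of log-mel spectrograms
y = torch.randint(0, 7, (8,))            # emotion labels
x_adv = fgsm_attack(model, x, y)
agreement = (model(x).argmax(1) == model(x_adv).argmax(1)).float().mean()
print(f"prediction agreement after attack: {agreement:.2f}")
```

In a white-box setting such as this, the attacker uses the model's gradients directly; black-box attacks, by contrast, would query the model's outputs without access to its parameters.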