Using Deep Autoencoders for In-vehicle Audio Anomaly Detection

KNOWLEDGE-BASED AND INTELLIGENT INFORMATION & ENGINEERING SYSTEMS (KES 2021)

Cited by 3 | Views 5
Abstract
Recent developments in self-driving cars have increased interest in autonomous shared taxicabs. While most self-driving technologies focus on the outside environment, there is also a need to provide in-vehicle intelligence (e.g., to detect health and safety issues related to the car occupants). Set within an R&D project focused on in-vehicle cockpit intelligence, the research presented in this paper addresses an unsupervised Acoustic Anomaly Detection (AAD) task. Since no data exist in this domain, we first design an in-vehicle sound event data simulator that realistically mixes background audio (recorded from car driving trips) with normal (e.g., people talking, radio on) and abnormal (e.g., people arguing, coughing) event sounds, allowing the generation of three synthetic in-vehicle sound datasets. Then, we explore two main sound feature extraction methods (based on a combination of three audio features and mel frequency energy coefficients) and propose a novel Long Short-Term Memory Autoencoder (LSTM-AE) deep learning architecture for in-vehicle sound anomaly detection. The proposed LSTM-AE achieved competitive results when compared with two state-of-the-art methods, namely a dense Autoencoder (AE) and a two-stage clustering. (C) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0). Peer review under the responsibility of the scientific committee of KES International.
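
To make the LSTM-AE idea concrete, below is a minimal, hypothetical sketch of an LSTM Autoencoder for acoustic anomaly detection in the spirit described by the abstract: the model is trained only on normal in-vehicle sounds (represented here as sequences of mel frequency energy coefficients) and flags a clip as anomalous when its reconstruction error is high. The layer sizes, input shape, training data, and the 99th-percentile thresholding rule are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical LSTM Autoencoder sketch (tf.keras); not the paper's exact model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FRAMES, N_MELS = 32, 64  # assumed input: 32 frames x 64 mel energy coefficients

def build_lstm_ae(n_frames=N_FRAMES, n_mels=N_MELS, latent=32):
    inp = layers.Input(shape=(n_frames, n_mels))
    # Encoder: compress the mel-energy sequence into a latent vector.
    x = layers.LSTM(64, return_sequences=True)(inp)
    z = layers.LSTM(latent)(x)
    # Decoder: repeat the latent vector and reconstruct the input sequence.
    x = layers.RepeatVector(n_frames)(z)
    x = layers.LSTM(64, return_sequences=True)(x)
    out = layers.TimeDistributed(layers.Dense(n_mels))(x)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Fit on normal sounds only; abnormal events are unseen during training.
model = build_lstm_ae()
X_normal = np.random.rand(256, N_FRAMES, N_MELS).astype("float32")  # placeholder data
model.fit(X_normal, X_normal, epochs=10, batch_size=32, verbose=0)

def anomaly_score(model, x):
    # Mean squared reconstruction error of a single clip; higher = more anomalous.
    recon = model.predict(x[None, ...], verbose=0)[0]
    return float(np.mean((x - recon) ** 2))

# Assumed decision rule: threshold at the 99th percentile of normal-data scores.
scores = [anomaly_score(model, x) for x in X_normal]
threshold = np.percentile(scores, 99)
```

A new clip would then be scored with anomaly_score and reported as anomalous when the score exceeds the threshold; this reconstruction-error criterion is the standard way autoencoders are used for unsupervised anomaly detection, though the paper's actual thresholding details are not given in the abstract.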
Keywords
Anomaly Detection, Audio Input Representation, Deep Learning, In-vehicle Data, Unsupervised Learning