Multimodal Multi-Channel On-Line Speaker Diarization Using Sensor Fusion Through SVM

IEEE Trans. Multimedia (2015)

Cited by 41 | Viewed 31
Abstract
Speaker diarization (SD) is the process of assigning the speech segments of an audio stream to their corresponding speakers, thus comprising the problems of voice activity detection (VAD), speaker labeling/identification, and often sound source localization (SSL). Most past research activities aimed at applications such as broadcast news, meetings, conversational telephony, and automatic multimodal data annotation, where SD may be performed off-line. However, a recent research focus is human-computer interaction (HCI) systems where SD must be performed on-line and in real time, as in modern gaming devices and interaction with large displays. Such applications often further suffer from noise, reverberation, and overlapping speech, making them increasingly challenging. In such situations, multimodal/multisensory approaches can provide more accurate results than unimodal ones, given that one data stream may compensate for occasional instabilities of the other modalities. Accordingly, this paper presents an on-line multimodal SD algorithm designed to work in a realistic environment with multiple, overlapping speakers. Our work employs a microphone array, a color camera, and a depth sensor as input streams, from which speech-related features are extracted and later merged through a support vector machine approach consisting of VAD and SSL modules. Speaker identification is incorporated through a hybrid technique of face-positioning history and face recognition. Our final SD approach experimentally achieves an average diarization error rate of 11.48% in scenarios with up to three simultaneous speakers, and is able to run at 3.2× real time.
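The abstract describes feature-level fusion, where speech-related features from several sensors are concatenated and fed to a support vector machine for voice activity detection. The sketch below is a minimal illustration of that general idea, not the paper's implementation: the three per-frame features (audio energy, SSL confidence, lip motion) and all numeric values are hypothetical stand-ins, using scikit-learn's `SVC`.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-frame fused features: [audio_energy, ssl_confidence, lip_motion].
# Speech frames cluster around high values, silence frames around low values.
n = 200
speech = rng.normal(loc=[1.0, 0.8, 0.7], scale=0.2, size=(n, 3))
silence = rng.normal(loc=[0.1, 0.2, 0.1], scale=0.2, size=(n, 3))
X = np.vstack([speech, silence])
y = np.array([1] * n + [0] * n)  # 1 = voice activity, 0 = silence

# Feature-level fusion: one SVM classifies the concatenated modality features.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# Classify a new speech-like frame.
frame = np.array([[0.95, 0.75, 0.65]])
print(clf.predict(frame)[0])
```

In an on-line setting, such a classifier would be applied frame by frame to the incoming streams; the paper's actual system additionally uses SSL and a face-based identification stage to assign each detected speech segment to a speaker.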
Keywords
Speech, Feature extraction, Microphones, Human-computer interaction, Arrays, Robustness, Speech processing