Sensory Audio Focusing Detection Using Brain-Computer Interface Archetype

2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI)

Abstract
Every day, people are placed in environments where countless conversations take place simultaneously within earshot. Speech intelligibility in the presence of multiple speakers, commonly known as the 'Cocktail Party Phenomenon', is significantly reduced for most hearing-impaired listeners who use hearing assistive devices [1]. Prior research addressing this issue includes noise filtering based on the trajectories of multiple moving speakers and on the locations of talking targets identified through face detection [2][3]. This study focuses on the practicality of audio filtering through measuring electroencephalogram (EEG) signals using a Brain-Computer Interface (BCI) system. The study explores the use of machine learning algorithms to classify which speaker the listener is focusing on. Training data are obtained from a listener focusing on one auditory stimulus (an audiobook) while other auditory stimuli are presented at the same time. A g.Nautilus BCI headset was used to record the EEG data. After collecting trial data for each audio source, a machine learning algorithm trains a classifier to distinguish one audiobook from another. Data was collected from five subjects in each trial. Results yielded an accuracy above 90% across all three experiments.
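The pipeline described above amounts to per-trial feature extraction followed by supervised classification. The abstract does not specify the features or classifier used, so the following is only a minimal sketch: band-power features and a linear SVM stand in as illustrative choices, the sampling rate, channel count, and trial parameters are assumptions, and synthetic data replaces the actual g.Nautilus recordings.

import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 250            # sampling rate in Hz (assumed)
N_CHANNELS = 32     # g.Nautilus channel count varies by model; 32 assumed
N_TRIALS = 100      # trials per attended audio source (assumed)
TRIAL_SEC = 4       # trial length in seconds (assumed)

def bandpower_features(trial, fs=FS):
    """Mean power in the delta/theta/alpha/beta bands for each channel."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]
    return np.concatenate([
        psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
        for lo, hi in bands
    ])

# Placeholder data: one class label per attended audiobook (0 or 1).
rng = np.random.default_rng(0)
trials = rng.standard_normal((2 * N_TRIALS, N_CHANNELS, FS * TRIAL_SEC))
labels = np.repeat([0, 1], N_TRIALS)

# Extract features per trial, then cross-validate the classifier.
X = np.array([bandpower_features(t) for t in trials])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")

With real attended-versus-unattended EEG trials in place of the synthetic arrays, the same cross-validation loop would yield accuracies comparable to those the paper reports; on the random placeholder data it will hover near chance.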
Keywords
Machine Learning, Speech Intelligibility, Electroencephalogram, Brain-Computer Interface, Blind Signal Separation