
Comparison of Feature-Model Variants for coSpeech-EEG Classification

2020 National Conference on Communications (NCC)

Abstract
One of the most significant obstacles that must be overcome in pursuing the utilization of brain signals for device control is the formulation of a robust signal processing method that can extract event-specific information from real-time EEG signals. Typical Brain-Computer Interface systems comprise signal acquisition, feature extraction, and classification modules. The focus of this paper is to experimentally evaluate various feature extraction and classification modules to comparatively determine the best-performing feature-model (FM) pair. A few popular FM variants are implemented to classify units from coSpeech-EEG data collected during speech audition, imagination, and production. Performance variations across sessions and subjects are also studied to analyse the scalability and robustness of the various FM pairs. Simultaneous diagonalization of multiclass common spatial patterns obtained on EEG data, coupled with a Gaussian mixture model based Hidden Markov Model, proves to be the best FM pair for the task at hand, rendering an average accuracy much higher than chance across 30 subjects in a multi-unit classification problem.
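To illustrate the kind of feature-model pipeline the abstract describes (CSP-based spatial filtering followed by GMM-HMM classification), the sketch below shows a simplified reconstruction, not the authors' code. It covers only the two-class case, where simultaneous diagonalization of the class covariances reduces to a single generalized eigenproblem; the paper's multiclass CSP and 2-level DP decoding are not reproduced. The array names `trials` and `y`, the helper functions, and all window/model sizes are assumptions, and the GMM-HMM relies on the hmmlearn library.

```python
# Hypothetical CSP + GMM-HMM sketch for EEG trial classification.
# Assumes `trials` has shape (n_trials, n_channels, n_samples) and `y` holds integer labels.
import numpy as np
from scipy.linalg import eigh
from hmmlearn.hmm import GMMHMM

def csp_filters(trials, y, n_filters=6):
    """Two-class CSP: simultaneously diagonalize the class covariance matrices."""
    covs = []
    for c in np.unique(y):
        X = trials[y == c]
        C = np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
        covs.append(C)
    C1, C2 = covs
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w diagonalizes both covariances.
    evals, evecs = eigh(C1, C1 + C2 + 1e-10 * np.eye(C1.shape[0]))
    order = np.argsort(np.abs(evals - 0.5))[::-1]   # most discriminative filters first
    return evecs[:, order[:n_filters]]              # (n_channels, n_filters)

def extract_features(trials, W, win=50, hop=25):
    """Log-variance of CSP-projected signals in sliding windows -> one feature sequence per trial."""
    feats = []
    for x in trials:                                 # assumes each trial is longer than `win`
        z = W.T @ x                                  # spatially filtered signal
        seq = [np.log(np.var(z[:, s:s + win], axis=1) + 1e-12)
               for s in range(0, z.shape[1] - win + 1, hop)]
        feats.append(np.array(seq))
    return feats

def train_gmm_hmms(feats, y, n_states=3, n_mix=2):
    """Fit one GMM-HMM per class on that class's feature sequences."""
    models = {}
    for c in np.unique(y):
        seqs = [f for f, lab in zip(feats, y) if lab == c]
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[c] = m
    return models

def predict(models, feat_seq):
    """Assign the class whose GMM-HMM gives the highest log-likelihood."""
    return max(models, key=lambda c: models[c].score(feat_seq))
```

In this setup the spatial filters carry the discriminative variance structure, while the GMM-HMM captures the temporal evolution of the windowed log-variance features within a trial, which is the role the HMM plays in the FM pairs compared in the paper.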
Key words
Coherent Speech-EEG, GMM-HMM, 2-level DP, simultaneous diagonalization, CSP, BCI