Magnetoencephalogram-based brain-computer interface for hand-gesture decoding using deep learning.

Cerebral Cortex (2023)

Abstract
Advancements in deep learning algorithms over the past decade have led to extensive developments in brain-computer interfaces (BCI). A promising imaging modality for BCI is magnetoencephalography (MEG), a non-invasive functional imaging technique. The present study developed a MEG sensor-based BCI neural network to decode Rock-Paper-Scissors gestures (MEG-RPSnet). Unique preprocessing pipelines in tandem with convolutional neural network deep-learning models accurately classified gestures. On a single-trial basis, we found an average classification accuracy of 85.56% across 12 subjects. Our MEG-RPSnet model outperformed two state-of-the-art neural network architectures for electroencephalogram-based BCI as well as a traditional machine learning method, and performed as well as or better than machine learning methods that have employed invasive, electrocorticography-based BCI on the same task. In addition, MEG-RPSnet trained with an intra-subject approach outperformed a model trained with a cross-subject approach. Remarkably, we also found that when using only central-parietal-occipital regional sensors or occipitotemporal regional sensors, the deep learning model achieved classification performance similar to that of the whole-brain sensor model. The MEG-RPSnet model also distinguished neuronal features of individual hand gestures with very good accuracy. Altogether, these results show that non-invasive MEG-based BCI applications hold promise for future BCI developments in hand-gesture decoding.
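The abstract does not detail the MEG-RPSnet architecture itself. For readers unfamiliar with this class of model, the sketch below illustrates the general approach of a convolutional classifier over multichannel MEG trials: a temporal convolution along each sensor's time series, a spatial convolution across sensors, then a linear readout into the three gesture classes. This is only a minimal illustration, not the published model; the sensor count, trial length, and layer sizes are placeholder assumptions chosen to make the example self-contained.

```python
# Minimal sketch of a CNN gesture classifier for single-trial MEG data.
# NOTE: this is NOT the published MEG-RPSnet architecture. N_CHANNELS,
# N_SAMPLES, and all layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS = 204   # assumed number of MEG sensors
N_SAMPLES = 500    # assumed samples per single trial
N_CLASSES = 3      # rock, paper, scissors

class GestureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution along each sensor's time series
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            # Spatial convolution across all sensors at once
            nn.Conv2d(16, 32, kernel_size=(N_CHANNELS, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            # Downsample in time and regularize
            nn.AvgPool2d(kernel_size=(1, 10)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(32 * (N_SAMPLES // 10), N_CLASSES)

    def forward(self, x):
        # x: (batch, channels, samples) -> add a singleton feature-map axis
        x = x.unsqueeze(1)
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = GestureCNN()
    trials = torch.randn(8, N_CHANNELS, N_SAMPLES)  # a batch of 8 fake trials
    logits = model(trials)
    print(logits.shape)  # torch.Size([8, 3])
```

Under the intra-subject approach reported in the abstract, a model like this would be trained and evaluated on trials from the same participant; the cross-subject comparison instead trains on some participants and tests on others, which the authors found performed worse.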
Keywords
deep learning, brain–computer interface, magnetoencephalogram-based, hand-gesture decoding