Federated deep transfer learning for EEG decoding using multiple BCI tasks

arXiv (2023)

Abstract
Deep learning is the state of the art in BCI decoding. However, it is very data-hungry, and training decoders requires pooling data from multiple sources. Naively pooled EEG data from heterogeneous sources decrease decoding performance due to negative transfer [1]. Recently, transfer learning for EEG decoding has been suggested as a remedy [2], [3] and has become the subject of recent BCI competitions (e.g. BEETL [4]), but there are two complications in combining data from many subjects. First, privacy is not protected, as highly personal brain data needs to be shared (and copied across increasingly tight information governance boundaries). Moreover, BCI data are collected from different sources and often involve different BCI tasks, which has been thought to limit their reusability. Here, we demonstrate a federated deep transfer learning technique, the Multidataset Federated Separate-Common-Separate Network (MFSCSN), based on our previous work on the SCSN [1], which integrates privacy-preserving properties into deep transfer learning to utilise data sets with different tasks. This framework trains a BCI decoder using different source data sets from different imagery tasks (e.g. some data sets with hands and feet, others with single hands and tongue, etc.). Therefore, by introducing privacy-preserving transfer learning techniques, we unlock the reusability and scalability of existing BCI data sets. We evaluated our federated transfer learning method on the NeurIPS 2021 BEETL competition BCI task. The proposed architecture outperformed the baseline decoder by 3%. Moreover, compared with the baseline and other transfer learning algorithms, our method protects the privacy of the brain data from different data centres.
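The core idea described above can be illustrated with a minimal sketch. The following is NOT the authors' implementation: it is a hypothetical numpy toy assuming a three-block per-centre model (a private "separate" input block sized to each centre's EEG montage, a shared "common" block, and a private "separate" output head sized to each centre's task labels), with FedAvg-style aggregation applied only to the common block, so raw EEG and task-specific layers never leave a data centre. All shapes, the `local_update` stand-in, and the layer names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_site(n_channels, n_classes, hidden=16, common=8):
    """One data centre's model: separate-in / common / separate-out blocks.

    Channel count and class count differ per centre (different montages,
    different imagery tasks), so only the middle block is shareable.
    """
    return {
        "sep_in": rng.standard_normal((n_channels, hidden)) * 0.1,  # private
        "common": rng.standard_normal((hidden, common)) * 0.1,      # shared
        "sep_out": rng.standard_normal((common, n_classes)) * 0.1,  # private
    }

# Three hypothetical centres: different channel counts and label sets,
# e.g. 4-class motor imagery vs 2-class vs 3-class.
sites = [init_site(22, 4), init_site(32, 2), init_site(64, 3)]

def local_update(site, lr=0.01):
    # Stand-in for a local gradient step on the centre's own EEG data;
    # a real system would backpropagate a classification loss here.
    for k in site:
        site[k] = site[k] - lr * rng.standard_normal(site[k].shape)

for _ in range(5):  # federated rounds
    for s in sites:
        local_update(s)                    # training stays on-site
    # Server aggregates ONLY the common block; private blocks and raw
    # data never leave their centre.
    avg_common = np.mean([s["common"] for s in sites], axis=0)
    for s in sites:
        s["common"] = avg_common.copy()

# After aggregation all centres hold identical common weights, while the
# separate blocks remain centre- and task-specific.
assert all(np.allclose(s["common"], sites[0]["common"]) for s in sites)
```

In this sketch, privacy preservation comes from the communication pattern (only the common block's parameters cross the governance boundary), and multi-task reuse comes from keeping per-centre output heads, so centres with incompatible label sets can still contribute to the shared representation.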
Keywords
Deep Learning, Transfer Learning, Domain Adaptation, Brain-Computer Interfaces (BCI), Electroencephalography (EEG), Privacy-preserving AI, Federated Machine Learning