Benchmarking Neural Decoding Backbones towards Enhanced On-edge iBCI Applications
arXiv (2024)
Abstract
Traditional invasive Brain-Computer Interfaces (iBCIs) typically depend on
neural decoding processes conducted on workstations within laboratory settings,
which prevents their everyday usage. Implementing these decoding processes on
edge devices, such as wearables, introduces considerable challenges related
to computational demands, processing speed, and maintaining accuracy. This
study seeks to identify an optimal neural decoding backbone that boasts robust
performance and swift inference capabilities suitable for edge deployment. We
executed a series of neural decoding experiments involving nonhuman primates
engaged in random reaching tasks, evaluating four prospective models: Gated
Recurrent Unit (GRU), Transformer, Receptance Weighted Key Value (RWKV), and
Selective State Space model (Mamba), across several metrics: single-session
decoding, multi-session decoding, new session fine-tuning, inference speed,
calibration speed, and scalability. The findings indicate that although the GRU
model delivers sufficient accuracy, the RWKV and Mamba models are preferable
due to their superior inference and calibration speeds. Additionally, RWKV and
Mamba comply with the scaling law, demonstrating improved performance with
larger data sets and increased model sizes, whereas GRU shows less pronounced
scalability, and the Transformer model requires computational resources that
scale prohibitively. This paper presents a thorough comparative analysis of the
four models in various scenarios. The results are pivotal in pinpointing an
optimal backbone that can handle increasing data volumes and is viable for edge
implementation. This analysis provides essential insights for ongoing research
and practical applications in the field.