Memory-Enhanced Sequential Variational Transformer for Inferring Neural Population Dynamics

Zhixiang Zhang, Zhengdong Wang, Jie Zhou, Biao Jie

2023 International Conference on Image Processing, Computer Vision and Machine Learning (ICICML)

Abstract
As deep learning continues to expand its applications in neuroscience, studies of neural population dynamics such as LFADS seek to infer latent factors from high-dimensional sequential features. However, these methods neglect the effect of neurons' historical states on their current states. We therefore propose a memory-enhanced sequential variational transformer (MESVT), which combines the current states of neural population features with their historical states through a cross-attention mechanism. We validate the performance of MESVT on two single-cell-level recording datasets and, using MESVT, analyze the potential dynamic encoding mechanism of neural populations in the primary visual area (VISp) under visual stimuli.
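To make the memory mechanism concrete, the following is a minimal sketch (not the authors' code) of how current population states can attend to a buffer of historical states via cross-attention; the module name MemoryCrossAttention and the dimensions d_model and n_heads are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class MemoryCrossAttention(nn.Module):
    """Hypothetical sketch: fuse current latent states with cached history."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Cross-attention: queries come from the current states,
        # keys/values from the stored historical states.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, current: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # current: (batch, T_cur, d_model) latent features at the present step(s)
        # memory:  (batch, T_hist, d_model) cached historical latent states
        fused, _ = self.attn(query=current, key=memory, value=memory)
        # Residual connection so the current state is enriched, not replaced.
        return self.norm(current + fused)

# Usage: fuse one current step with 20 cached historical steps.
x_now = torch.randn(8, 1, 64)
x_hist = torch.randn(8, 20, 64)
out = MemoryCrossAttention()(x_now, x_hist)  # shape: (8, 1, 64)
```

In this reading, the memory buffer plays the role of the historical states described in the abstract; how MESVT actually builds, updates, or gates that buffer within its variational framework is not specified here.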
Keywords
dynamics, neural populations, deep learning, variational inference, transformer