Generalizability Under Sensor Failure: Tokenization + Transformers Enable More Robust Latent Spaces
CoRR (2024)
Abstract
A major goal in neuroscience is to discover neural data representations that
generalize. This goal is challenged by variability across recording sessions
(e.g. environment), subjects (e.g. varying neural structures), and sensors
(e.g. sensor noise), among other factors. Recent work has begun to address
generalization across sessions and subjects, but few studies address robustness
to sensor failure, which is highly prevalent in neuroscience experiments. To
address these generalizability dimensions, we first collect our own
electroencephalography dataset with numerous sessions, subjects, and sensors,
then study two time series models: EEGNet (Lawhern et al., 2018) and TOTEM
(Talukder et al., 2024). EEGNet is a widely used convolutional neural network,
while TOTEM is a discrete time series tokenizer and transformer model. We find
that TOTEM outperforms or matches EEGNet across all generalizability cases.
Finally, through analysis of TOTEM's latent codebook, we observe that
tokenization enables generalization.
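To make the abstract's central idea concrete, the following is a minimal sketch of discrete time-series tokenization via nearest-neighbor lookup in a learned codebook, loosely in the spirit of VQ-style models like TOTEM. All names and hyperparameters here (`tokenize`, patch length, codebook size) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def tokenize(series: np.ndarray, codebook: np.ndarray, patch_len: int) -> np.ndarray:
    """Split a 1-D series into patches and map each patch to the index
    of its nearest codebook vector (Euclidean distance)."""
    n_patches = len(series) // patch_len
    patches = series[: n_patches * patch_len].reshape(n_patches, patch_len)
    # Pairwise distances between every patch and every codebook entry.
    dists = np.linalg.norm(patches[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)  # one discrete token per patch

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 8))   # 256 hypothetical codes of length 8
signal = rng.normal(size=128)          # stand-in for one EEG channel
tokens = tokenize(signal, codebook, patch_len=8)
print(tokens.shape)  # (16,)
```

Because every input window is snapped to one of a small set of shared codes, downstream transformer layers see the same discrete vocabulary regardless of session, subject, or sensor, which is one intuition for why tokenization can aid generalization.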