Contrasting action and posture coding with hierarchical deep neural network models of proprioception

eLife(2022)

Abstract
Biological motor control is versatile and efficient. Muscles are flexible and undergo continuous changes, requiring distributed adaptive control mechanisms. How proprioception solves this problem in the brain is unknown. The canonical role of proprioception is to represent the body state, yet we hypothesize that the proprioceptive system can decode high-level, multi-feature actions. To test this hypothesis, we pursued a task-driven modeling approach. We generated a large synthetic dataset of human arm trajectories tracing the alphabet in 3D space and used a musculoskeletal model, together with modeled muscle spindle inputs, to extract muscle activity. We then contrasted two tasks: a character trajectory-decoding task and an action-recognition task, which allow training hierarchical models either to decode position or to classify character identity from the spindle firing patterns. Artificial neural networks could robustly solve both tasks, and the networks' units show tuning properties akin to neurons in the primate somatosensory cortex and brainstem. Remarkably, only the action-recognition-trained models, and not the trajectory-decoding-trained ones, possess directionally selective units (which are also uniformly distributed), as in the primate brain. Taken together, our model is the first to link tuning properties in the proprioceptive system at multiple levels to the behavioral level. We find that action recognition, rather than the canonical trajectory-decoding hypothesis, better explains what is known about the proprioceptive system.

Competing Interest Statement: The authors have declared no competing interest.
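The contrast between the two training objectives can be sketched in code. The toy below is a minimal illustration, not the authors' pipeline: the spindle traces are random stand-ins rather than musculoskeletal-model outputs, and all names (`synth_spindle_trace`, `decode_position`, `classify_character`, the channel and timestep counts) are hypothetical. It only shows the structural difference between the tasks: trajectory decoding regresses a 3D position at every timestep, while action recognition assigns one character label to the whole trace.

```python
import random

random.seed(0)

N_SPINDLES = 25   # number of modeled spindle channels (hypothetical)
T = 50            # timesteps per character trace (hypothetical)
CHARS = ["a", "b", "c"]

def synth_spindle_trace(char_idx):
    """Toy stand-in for spindle firing while tracing one character:
    each character biases the mean firing rate differently."""
    return [[random.gauss(char_idx, 1.0) for _ in range(N_SPINDLES)]
            for _ in range(T)]

# Task 1: trajectory decoding -- regress (x, y, z) at every timestep.
def decode_position(frame, weights):
    return [sum(w * s for w, s in zip(ws, frame)) for ws in weights]

# Task 2: action recognition -- one label for the entire trace.
def classify_character(trace):
    mean_rate = sum(sum(frame) for frame in trace) / (T * N_SPINDLES)
    # nearest class mean (class k fires around rate k in this toy)
    return min(range(len(CHARS)), key=lambda k: abs(mean_rate - k))

trace = synth_spindle_trace(2)                  # a trace of the character "c"
pos_weights = [[0.01] * N_SPINDLES for _ in range(3)]
xyz = decode_position(trace[0], pos_weights)    # one 3D point per frame
label = CHARS[classify_character(trace)]        # one label per trace
```

In the paper's setting, both readouts would sit on top of the same hierarchical network trained end to end; the point here is only that the supervision signal differs in kind (per-timestep regression versus per-sequence classification).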