Fast and Expressive Gesture Recognition using a Combination-Homomorphic Electromyogram Encoder
CoRR (2023)
Abstract
We study the task of gesture recognition from electromyography (EMG), with
the goal of enabling expressive human-computer interaction at high accuracy,
while minimizing the time required for new subjects to provide calibration
data. To fulfill these goals, we define combination gestures consisting of a
direction component and a modifier component. New subjects demonstrate only the
single-component gestures, and we seek to extrapolate from these to all possible
single or combination gestures. We extrapolate to unseen combination gestures
by combining the feature vectors of real single gestures to produce synthetic
training data. This strategy allows us to provide a large and flexible gesture
vocabulary, while not requiring new subjects to demonstrate combinatorially
many example gestures. We pre-train an encoder and a combination operator using
self-supervision, so that we can produce useful synthetic training data for
unseen test subjects. To evaluate the proposed method, we collect a real-world
EMG dataset, and measure the effect of augmented supervision against two
baselines: a partially-supervised model trained with only single-gesture data
from the unseen subject, and a fully-supervised model trained with real single-
and combination-gesture data from the unseen subject. We find that the
proposed method provides a dramatic improvement over the partially-supervised
model, and achieves a useful classification accuracy that in some cases
approaches the performance of the fully-supervised model.
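The core idea of synthesizing training data for unseen combination gestures can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensionality, the linear combination operator, and all names are assumptions standing in for the paper's pre-trained, self-supervised encoder and combination operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensionality of the EMG encoder's output.
D = 8

def combine(direction_feat, modifier_feat, W):
    # Illustrative combination operator: a linear map on the concatenated
    # single-gesture features. The paper instead pre-trains its operator
    # with self-supervision; this stand-in only shows the data flow.
    return np.concatenate([direction_feat, modifier_feat]) @ W

# Stand-ins for encoder outputs of two real single gestures
# demonstrated by a new subject.
direction_feat = rng.normal(size=D)  # e.g. an "up" direction gesture
modifier_feat = rng.normal(size=D)   # e.g. a "pinch" modifier gesture

# Randomly initialized weights stand in for the pre-trained operator.
W = rng.normal(size=(2 * D, D))

# Synthetic feature vector labeled with the unseen combination gesture,
# usable as augmented supervision for the downstream classifier.
synthetic_feat = combine(direction_feat, modifier_feat, W)
synthetic_label = ("up", "pinch")

print(synthetic_feat.shape)
```

With one such synthetic example per (direction, modifier) pair, a classifier over the full gesture vocabulary can be trained even though the subject never demonstrated any combination gesture.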