ELF-UA: Efficient Label-Free User Adaptation in Gaze Estimation
arXiv (2024)
Abstract
We consider the problem of user-adaptive 3D gaze estimation. The performance
of person-independent gaze estimation is limited due to interpersonal
anatomical differences. Our goal is to provide a personalized gaze estimation
model specifically adapted to a target user. Previous work on user-adaptive
gaze estimation requires labeled images of the target person to fine-tune
the model at test time. However, this is often unrealistic in real-world
applications, since it is cumbersome for an end-user to provide labeled
images. In addition, previous work requires the training data to have
both gaze labels and person IDs. This data requirement makes it infeasible to
use some of the available data. To tackle these challenges, this paper proposes
a new problem called efficient label-free user adaptation in gaze estimation.
Our model only needs a few unlabeled images of a target user for the model
adaptation. During offline training, we have some labeled source data without
person IDs and some unlabeled person-specific data. Our proposed method uses a
meta-learning approach to learn how to adapt to a new user with only a few
unlabeled images. Our key technical innovation is to use a generalization bound
from domain adaptation to define the loss function in meta-learning, so that
our method can effectively make use of both the labeled source data and the
unlabeled person-specific data during training. Extensive experiments validate
the effectiveness of our method on several challenging benchmarks.
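The abstract describes a meta-learning scheme in which the inner adaptation step uses only unlabeled target-user images, while the outer update exploits labeled source data. The following toy sketch (not the paper's code; the linear model, the mean-prediction discrepancy surrogate, and all names are illustrative assumptions) shows the general shape of such a first-order loop: adapt with a label-free loss, then meta-update so the adapted model fits labeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaze_model(w, x):
    """Toy linear gaze regressor: image features -> gaze angle."""
    return x @ w

def labeled_loss(w, x, y):
    """Supervised loss on labeled source data (mean squared error)."""
    return np.mean((gaze_model(w, x) - y) ** 2)

def unlabeled_loss(w, x_src, x_tgt):
    """Label-free surrogate in the spirit of domain-adaptation bounds:
    penalize the gap between mean predictions on source vs. target
    (a stand-in for a distribution-discrepancy term; an assumption,
    not the paper's actual bound)."""
    return (gaze_model(w, x_src).mean() - gaze_model(w, x_tgt).mean()) ** 2

def numerical_grad(f, w, eps=1e-5):
    """Central-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

# Toy data: source images have gaze labels; the target user
# contributes only a few unlabeled images (here, 5).
d = 3
w_true = np.array([1.0, -2.0, 0.5])
x_src = rng.normal(size=(64, d))
y_src = x_src @ w_true
x_tgt = rng.normal(loc=0.3, size=(5, d))

w = np.zeros(d)
inner_lr, outer_lr = 0.1, 0.05
for _ in range(300):
    # Inner step: label-free adaptation to the target user.
    w_adapted = w - inner_lr * numerical_grad(
        lambda v: unlabeled_loss(v, x_src, x_tgt), w)
    # Outer step (first-order approximation): update the initialization
    # so that the *adapted* model performs well on labeled source data.
    w = w - outer_lr * numerical_grad(
        lambda v: labeled_loss(v, x_src, y_src), w_adapted)

# After meta-training, one label-free inner step personalizes the model.
w_final = w - inner_lr * numerical_grad(
    lambda v: unlabeled_loss(v, x_src, x_tgt), w)
adapted_loss = labeled_loss(w_final, x_src, y_src)
print(f"labeled loss after label-free adaptation: {adapted_loss:.6f}")
```

The point of the sketch is the division of labor: gradients of the unsupervised discrepancy drive test-time adaptation, while supervised gradients only shape the meta-initialization during offline training.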