Probabilistic Test-Time Generalization by Variational Neighbor-Labeling
arXiv (2023)
Abstract
This paper strives for domain generalization, where models are trained
exclusively on source domains before being deployed on unseen target domains.
We follow the strict separation of source training and target testing, but
exploit the value of the unlabeled target data itself during inference. We make
three contributions. First, we propose probabilistic pseudo-labeling of target
samples to generalize the source-trained model to the target domain at test
time. We formulate test-time generalization as a variational inference
problem by modeling pseudo labels as distributions, which accounts for
uncertainty during generalization and mitigates the misleading signal of
inaccurate pseudo labels. Second, we learn variational neighbor labels that
incorporate the information of neighboring target samples to generate more
robust pseudo labels. Third, to incorporate more representative target
information and generate more precise and robust variational neighbor
labels, we introduce a meta-generalization stage during training that
simulates the test-time generalization procedure. Experiments on seven
widely-used datasets demonstrate the benefits, abilities, and effectiveness of
our proposal.
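The core idea of neighbor-informed pseudo labels can be illustrated with a minimal sketch. This is not the paper's actual variational method: it assumes a generic softmax classifier and simply pools each target sample's predicted class distribution with those of its k nearest neighbors in feature space, so a single confidently wrong prediction is smoothed by nearby samples. All function names and the 50/50 pooling weight are hypothetical choices for illustration.

```python
import math

def softmax(z):
    """Turn a list of logits into a probability distribution."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def neighbor_pseudo_labels(features, logits, k=3):
    """Hypothetical sketch: soft pseudo labels pooled over each target
    sample's k nearest neighbors in feature space."""
    probs = [softmax(z) for z in logits]  # per-sample label distributions
    n = len(features)
    pooled = []
    for i in range(n):
        # Euclidean distance to every other target sample (exclude self).
        dists = sorted(
            (math.dist(features[i], features[j]), j)
            for j in range(n) if j != i
        )
        neigh = [j for _, j in dists[:min(k, n - 1)]]
        # Average the sample's own distribution with its neighbors' mean;
        # the 0.5/0.5 weighting is an arbitrary illustrative choice.
        row = [
            0.5 * probs[i][c]
            + 0.5 * sum(probs[j][c] for j in neigh) / len(neigh)
            for c in range(len(probs[i]))
        ]
        pooled.append(row)
    return pooled

# Toy usage: 4 target samples, 2-D features, 3 classes.
feats = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
logits = [[2.0, 0.0, 0.0], [1.5, 0.2, 0.0],
          [0.0, 2.0, 0.0], [0.1, 1.8, 0.0]]
labels = neighbor_pseudo_labels(feats, logits, k=1)
```

Each row of `labels` remains a valid distribution (non-negative, summing to one), which is the property that lets such soft labels be treated as distributions rather than hard class assignments.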