Gradient-Based Meta-Learning Using Adaptive Multiple Loss Weighting and Homoscedastic Uncertainty

2023 3rd International Conference on Consumer Electronics and Computer Engineering (ICCECE)(2023)

Abstract
Model-agnostic meta-learning schemes adopt gradient descent to learn task commonalities and obtain initialization parameters for the meta-model, allowing it to rapidly adapt to new tasks with only a few training samples. Such schemes have therefore become the mainstream meta-learning approach for studying few-shot learning problems. This study mainly addresses the challenge of task uncertainty in few-shot learning and proposes an improved meta-learning approach, which first enables a task-specific learner to select the initial parameters that minimize the loss of a new task, then generates weights by comparing meta-loss differences, and finally introduces the homoscedastic uncertainty of the task to weight the diverse losses. Our model performs better on few-shot learning tasks than previous meta-learning approaches and improves robustness regardless of the initial learning rates and query sets.
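The final step the abstract describes, weighting diverse losses by homoscedastic uncertainty, is commonly realized by scaling each loss with a learnable log-variance term. The sketch below illustrates that general technique; it is not the paper's exact formulation, and the function name and the use of plain floats (rather than trainable tensors) are assumptions for illustration.

```python
import math

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses via homoscedastic uncertainty weighting.

    Each loss L_i is scaled by exp(-s_i), where s_i = log(sigma_i^2) is a
    learnable log-variance; adding s_i as a regularizer prevents the model
    from driving every weight toward zero. (Illustrative sketch only.)
    """
    total = 0.0
    for loss, s in zip(losses, log_vars):
        total += math.exp(-s) * loss + s
    return total

# With all log-variances at 0, the weights are 1 and the losses simply add.
combined = uncertainty_weighted_loss([1.0, 2.0], [0.0, 0.0])
```

In a full meta-learning setup the `log_vars` would be trainable parameters updated jointly with the model, so tasks whose losses are noisier automatically receive smaller weights.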
Key words
meta-learning, homoscedastic uncertainty, meta-loss