
Learning Approximate Invariance Requires Far Fewer Data

Pan-African Artificial Intelligence and Smart Systems (2023)

Abstract
Efficient learning, that is, learning from small datasets, is difficult for current deep learning models. Invariance has been conjectured to be key to their generalization potential. One of the most widely used procedures for learning invariant models is data augmentation (DA), which can be performed offline, by augmenting the data before any training, or online, during training. However, applying these techniques does not yield better generalization gains every time. We frame this problem as the stability of the generalization gains made by invariance-inducing techniques. In this study we introduce a new algorithm to train approximately invariant priors before the posterior training of a Bayesian Neural Network (BNN). Furthermore, we compare the generalization stability of our invariance-inducing algorithm with that of online DA and offline DA on MNIST and Fashion-MNIST under three perturbation processes: rotation, noise, and rotation+noise. Results show that learning approximately invariant priors requires less exposure to the perturbation process, but it leads the BNN to more stable generalization gains during posterior training. Finally, we also show that invariance-inducing techniques enhance uncertainty in Bayesian Neural Networks.
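As context for the comparison described in the abstract, the sketch below contrasts the two DA regimes (offline vs. online) under a rotation+noise perturbation process. It is a minimal illustration only, not the authors' algorithm or code: the dataset is stubbed with random 28x28 arrays, and the angle range, noise level, and batch size are assumptions.

```python
# Minimal sketch (not the authors' code) of offline vs. online data augmentation
# under a rotation+noise perturbation process. All parameters are assumptions.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def perturb(img, max_angle=30.0, noise_std=0.1):
    """Rotate a single 28x28 image by a random angle and add Gaussian noise."""
    angle = rng.uniform(-max_angle, max_angle)
    rotated = rotate(img, angle, reshape=False, mode="nearest")
    return rotated + rng.normal(0.0, noise_std, size=img.shape)

# Stand-in for a small MNIST-like training set (1,000 images of 28x28).
x_train = rng.random((1000, 28, 28))

def offline_augment(x, k=2):
    """Offline DA: augment once before training; the dataset grows (k+1)-fold."""
    extra = [np.stack([perturb(im) for im in x]) for _ in range(k)]
    return np.concatenate([x] + extra)

def online_batches(x, batch_size=64):
    """Online DA: perturb each mini-batch freshly at every training step."""
    while True:
        idx = rng.choice(len(x), batch_size, replace=False)
        yield np.stack([perturb(im) for im in x[idx]])

x_offline = offline_augment(x_train)       # fixed augmented dataset
batch = next(online_batches(x_train))      # one freshly perturbed batch
print(x_offline.shape, batch.shape)        # (3000, 28, 28) (64, 28, 28)
```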
Key words
Learning approximate invariance, Dataset, Bayesian neural network, Algorithm