MUST Augment: Efficient Augmentation with Multi-stage Stochastic Strategy

Artificial Neural Networks and Machine Learning - ICANN 2022, Part I (2022)

Abstract
Data augmentation is widely used to enhance data diversity and improve the generalization of deep-learning models. Recent work has achieved higher accuracy on image classification tasks by automatically searching for optimal augmentation policies, but such searches incur high computation cost and long search times because of large search spaces and complex search algorithms. In this paper, we present an augmentation method called MUST (MUlti-Stage sTochastic) Augment that skips policy search entirely. Instead of searching, our method applies a multi-stage augmentation strategy on top of a simple stochastic augmentation mechanism. This complexity-driven, multi-stage strategy ensures that the whole training process converges smoothly to a good-quality model; within each stage, augmentation is applied stochastically, providing both scalability and diversity because additional augmentation operations can be introduced without extra search cost. Extensive experiments show that our method achieves state-of-the-art results, with advantages in both accuracy and efficiency over search-based augmentation methods. Beyond image classification, we also examine the general validity of MUST on Face Recognition and Text Detection tasks, demonstrating the effectiveness of our method across various CV tasks.
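The abstract gives only a high-level description, so the following is a minimal sketch of the stated idea: within each training stage a fixed number of augmentation operations is sampled at random from a pool (no policy search), and later stages apply more operations at stronger magnitudes. The operation pool, the stage schedule, and the must_augment helper are illustrative assumptions for this sketch, not the paper's actual configuration.

# Sketch of a multi-stage stochastic augmentation strategy (assumed details,
# not the paper's exact setup): ops are sampled uniformly per image, and the
# number of ops / magnitude cap grows with the training stage.
import random
from PIL import Image, ImageOps, ImageEnhance

# Candidate operations: each takes (image, magnitude in [0, 1]) and returns an image.
OPS = [
    lambda img, m: img.rotate(30 * (2 * m - 1)),                     # rotation up to +/-30 degrees
    lambda img, m: ImageEnhance.Color(img).enhance(0.5 + m),         # color jitter
    lambda img, m: ImageEnhance.Contrast(img).enhance(0.5 + m),      # contrast jitter
    lambda img, m: ImageEnhance.Sharpness(img).enhance(0.5 + m),     # sharpness jitter
    lambda img, m: ImageOps.posterize(img, max(1, int(8 - 4 * m))),  # reduce color bits
    lambda img, m: ImageOps.solarize(img, int(256 * (1 - m))),       # invert bright pixels
    lambda img, m: ImageOps.autocontrast(img),                       # contrast normalization
    lambda img, m: ImageOps.equalize(img),                           # histogram equalization
]

# Hypothetical stage schedule: (num_ops, max_magnitude) per stage, growing in
# complexity so early training sees mild augmentation and later training heavier.
STAGES = [(1, 0.3), (2, 0.6), (3, 0.9)]


def must_augment(img: Image.Image, epoch: int, total_epochs: int) -> Image.Image:
    """Apply the current stage's augmentation: sample ops uniformly, no search."""
    stage = min(len(STAGES) - 1, epoch * len(STAGES) // total_epochs)
    num_ops, max_mag = STAGES[stage]
    for op in random.sample(OPS, num_ops):
        img = op(img, random.uniform(0.0, max_mag))
    return img


if __name__ == "__main__":
    dummy = Image.new("RGB", (224, 224), color=(128, 128, 128))
    out = must_augment(dummy, epoch=45, total_epochs=90)
    print(out.size)

Because every operation is drawn independently per image, adding a new operation to OPS enlarges the diversity of augmented samples without adding any search cost, which is the scalability property the abstract emphasizes.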
Key words
Data augmentation, Multi-stage, Stochastic