Bag of tricks for backdoor learning

Wireless Networks (2024)

Abstract
Deep learning models are vulnerable to backdoor attacks, in which an adversary poisons the training data so that the victim model performs well on clean samples but misbehaves on poisoned samples. While researchers have studied backdoor attacks in depth, they have focused on specific attack and defense methods and neglected the impact of basic training tricks on attack effectiveness. Analyzing these influencing factors helps build secure deep learning systems and suggests new defense perspectives. To this end, we provide comprehensive evaluations using a weak clean-label backdoor attack on CIFAR10, focusing on the impact of a wide range of neglected training tricks on backdoor attacks. Specifically, we examine ten perspectives, including batch size, data augmentation, warmup, and mixup. The results demonstrate that backdoor attacks are sensitive to certain training tricks, and that tuning these basic tricks can significantly improve attack effectiveness. For example, appropriate warmup settings can enhance the effect of backdoor attacks by 22
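To make the attack setting concrete, the following is a minimal sketch of clean-label data poisoning as the abstract describes it: a small trigger is stamped onto a fraction of target-class images while their labels are left unchanged. The trigger pattern, function names, and poisoning rate here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def stamp_trigger(img, trigger_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner of an
    HxWxC image array (a common backdoor trigger; the paper's exact
    trigger is not specified here)."""
    poisoned = img.copy()
    poisoned[-size:, -size:, :] = trigger_value
    return poisoned

def poison_clean_label(images, labels, target_class, rate=0.1, seed=0):
    """Clean-label poisoning sketch: stamp the trigger on a fraction of
    TARGET-class images only, leaving their labels unchanged, so the
    poisoned samples still look correctly labeled to an inspector."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(labels == target_class)
    chosen = rng.choice(idx, size=max(1, int(rate * len(idx))), replace=False)
    poisoned = images.copy()
    for i in chosen:
        poisoned[i] = stamp_trigger(poisoned[i])
    return poisoned, chosen
```

At test time, stamping the same trigger onto any input would then steer the trained model toward the target class, which is what the evaluated training tricks strengthen or weaken.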
Key words
Backdoor attacks, Tricks, Deep learning models