Towards Understanding Dual BN In Hybrid Adversarial Training
arXiv (2024)
Abstract
There is a growing concern about applying batch normalization (BN) in
adversarial training (AT), especially when the model is trained on both
adversarial samples and clean samples (termed Hybrid-AT). With the assumption
that adversarial and clean samples are from two different domains, a common
practice in prior works is to adopt Dual BN, where BN_adv and BN_clean are used for
adversarial and clean branches, respectively. A popular belief for motivating
Dual BN is that estimating normalization statistics of this mixture
distribution is challenging and thus disentangling it for normalization
achieves stronger robustness. In contrast to this belief, we reveal that
disentangling statistics plays a smaller role than disentangling affine parameters
in model training. This finding aligns with prior work (Rebuffi et al., 2023),
and we build upon their research for further investigations. We demonstrate
that the domain gap between adversarial and clean samples is not very large,
which is counter-intuitive given the significant influence of adversarial
perturbations on model accuracy. We further propose a two-task hypothesis
which serves as the empirical foundation and a unified framework for Hybrid-AT
improvement. We also investigate Dual BN at test time and reveal that affine
parameters characterize the robustness during inference. Overall, our work
sheds new light on understanding the mechanism of Dual BN in Hybrid-AT and its
underlying justification.
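
The Dual BN mechanism described above can be summarized in a short sketch. The following PyTorch code is illustrative only: the class names (DualBatchNorm2d, SharedStatsDualAffine2d) and the branch flag are assumptions, not the authors' implementation. The first module routes samples through BN_adv or BN_clean as in standard Dual BN; the second shares normalization statistics across branches and disentangles only the affine parameters, the factor the abstract reports as more important.

    # Illustrative sketch of Dual BN, assuming a PyTorch setting.
    # Names and wiring are assumptions, not the paper's released code.
    import torch
    import torch.nn as nn

    class DualBatchNorm2d(nn.Module):
        """Dual BN: separate BN_adv / BN_clean for the two branches."""
        def __init__(self, num_features: int):
            super().__init__()
            self.bn_adv = nn.BatchNorm2d(num_features)    # adversarial branch
            self.bn_clean = nn.BatchNorm2d(num_features)  # clean branch

        def forward(self, x: torch.Tensor, adversarial: bool) -> torch.Tensor:
            return self.bn_adv(x) if adversarial else self.bn_clean(x)

    class SharedStatsDualAffine2d(nn.Module):
        """Variant: shared normalization statistics, per-branch affine
        parameters (the component the abstract finds matters more)."""
        def __init__(self, num_features: int):
            super().__init__()
            # One BN without affine terms: both branches share its
            # running mean/variance estimates.
            self.bn = nn.BatchNorm2d(num_features, affine=False)
            # Separate per-branch scale (gamma) and shift (beta).
            self.gamma_adv = nn.Parameter(torch.ones(num_features))
            self.beta_adv = nn.Parameter(torch.zeros(num_features))
            self.gamma_clean = nn.Parameter(torch.ones(num_features))
            self.beta_clean = nn.Parameter(torch.zeros(num_features))

        def forward(self, x: torch.Tensor, adversarial: bool) -> torch.Tensor:
            x = self.bn(x)  # normalize with shared statistics
            gamma = self.gamma_adv if adversarial else self.gamma_clean
            beta = self.beta_adv if adversarial else self.beta_clean
            return x * gamma.view(1, -1, 1, 1) + beta.view(1, -1, 1, 1)

    # Usage sketch: route a batch through the adversarial branch.
    # layer = DualBatchNorm2d(64)
    # y = layer(torch.randn(8, 64, 32, 32), adversarial=True)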