
Convex Formulation of Robust Two-layer Neural Network Training

Semantic Scholar (2021)

Abstract
Recent work has shown that training a two-layer, scalar-output, fully-connected neural network with ReLU activations can be reformulated as a finite-dimensional convex program. Leveraging this result, we derive convex optimization approaches to the "adversarial training" problem, which trains neural networks that are robust to adversarial input perturbations. These convex problems are derived for the cases in which hinge loss or squared loss between the network output and the target is used to compute the training cost. Our work provides an alternative to current approximate adversarial training methods, such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). We demonstrate across several experiments that the proposed method achieves significantly higher adversarial robustness than existing training methods.
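For context, the adversarial training problem the abstract refers to is the min-max objective min_θ Σᵢ max_{‖δᵢ‖≤ε} ℓ(f_θ(xᵢ + δᵢ), yᵢ), whose inner maximization is typically approximated by attacks such as FGSM. Below is a minimal PyTorch sketch of that FGSM baseline for a two-layer ReLU network with squared loss; the network shape, ε, and function names are illustrative assumptions, and this shows the approximation method the paper compares against, not the paper's convex formulation.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """One-step FGSM: move each input by eps in the direction of the sign
    of the loss gradient (an L-infinity-bounded perturbation). This is the
    baseline attack named in the abstract, not the paper's convex program."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.mse_loss(model(x).squeeze(-1), y)  # squared loss, per the abstract
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps):
    """Approximate adversarial training: minimize the loss on
    FGSM-perturbed inputs (a one-step proxy for the inner max)."""
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.mse_loss(model(x_adv).squeeze(-1), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative two-layer, scalar-output ReLU network (shapes assumed).
d, m = 20, 100
model = torch.nn.Sequential(
    torch.nn.Linear(d, m), torch.nn.ReLU(), torch.nn.Linear(m, 1)
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(64, d), torch.randn(64)
adversarial_training_step(model, optimizer, x, y, eps=0.1)
```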