2021
DOI: 10.48550/arxiv.2105.12237
Preprint

Practical Convex Formulation of Robust One-hidden-layer Neural Network Training

Abstract: Recent work has shown that the training of a one-hidden-layer, scalar-output fully-connected ReLU neural network can be reformulated as a finite-dimensional convex program. Unfortunately, the scale of such a convex program grows exponentially with data size. In this work, we prove that a stochastic procedure with linear complexity well approximates the exact formulation. Moreover, we derive a convex optimization approach to efficiently solve the "adversarial training" problem, which trains neural networks that …
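The exact convex reformulation referenced in the abstract enumerates the ReLU activation patterns induced by the data, and their count grows exponentially with data size; the stochastic procedure keeps only a randomly sampled subset of patterns. The following is a minimal sketch of that sampling step only (not the authors' code, and it omits the convex program itself); the function name and parameters are illustrative:

```python
import numpy as np

def sample_activation_patterns(X, n_samples, seed=0):
    """Sample distinct ReLU sign patterns 1[X u >= 0] for random directions u.

    Each distinct pattern corresponds to one diagonal matrix D_i in the
    convex reformulation; sampling n_samples directions gives at most
    n_samples patterns, i.e. linear rather than exponential complexity.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    patterns = set()
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        # Binary activation pattern of the ReLU layer on all n data points.
        patterns.add(tuple((X @ u >= 0).astype(int)))
    return [np.array(p) for p in patterns]

# Toy data: 20 points in 3 dimensions.
X = np.random.default_rng(1).standard_normal((20, 3))
patterns = sample_activation_patterns(X, n_samples=50)
```

The sampled patterns would then parameterize a convex program over per-pattern weight vectors, which a generic convex solver can handle.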

Cited by 0 publications
References 7 publications