ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8683242

Convex Relaxations of Convolutional Neural Nets

Abstract: We propose convex relaxations for convolutional neural nets with one hidden layer where the output weights are fixed. For convex activation functions such as rectified linear units, the relaxations are convex second order cone programs which can be solved very efficiently. We prove that the relaxation recovers the global minimum under a planted model assumption, given sufficiently many training samples from a Gaussian distribution. We also identify a phase transition phenomenon in recovering the global minimum…
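
The abstract is truncated, so the paper's exact second-order cone formulation is not reproduced here. The following is only a minimal illustrative sketch of the general idea, assuming a squared loss, fixed all-ones output weights, Gaussian training inputs as in the planted model, and a simple elementwise relaxation of the nonconvex ReLU constraint u = max(0, Xw) to the convex set {u >= Xw, u >= 0}; the variable names and the cvxpy-based setup are illustrative assumptions, not the authors' code.

# Hedged sketch: a convex relaxation of a one-hidden-layer ReLU network
# with fixed output weights (NOT the paper's exact SOCP formulation).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 10, 5              # samples, input dimension, hidden units (assumed sizes)
X = rng.standard_normal((n, d))   # Gaussian training inputs, as in the planted model
W_true = rng.standard_normal((d, m))
alpha = np.ones(m)                # fixed output weights
y = np.maximum(X @ W_true, 0) @ alpha  # planted labels from a ReLU teacher network

W = cp.Variable((d, m))           # hidden-layer weights
U = cp.Variable((n, m))           # relaxed hidden activations
# Relax u = max(0, Xw) to the convex set {u >= Xw, u >= 0}.
constraints = [U >= X @ W, U >= 0]
objective = cp.Minimize(cp.sum_squares(U @ alpha - y))
prob = cp.Problem(objective, constraints)
prob.solve()                      # handed to a cone-program solver by cvxpy
print("relaxation objective:", prob.value)

Because every feasible point of the original ReLU problem (with U set to max(0, X @ W)) is feasible for the relaxation, the optimal value of this convex program lower-bounds the nonconvex training objective.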

Cited by 3 publications (2 citation statements). References: 21 publications.
“…Another interesting research direction is investigating efficient relaxations of our vector output convex programs for larger scale simulations. Convex relaxations for scalar output ReLU networks with approximation guarantees were studied in (Bartan & Pilanci, 2019; Ergen & Pilanci, 2019a;b; d'Aspremont & Pilanci, 2020). Furthermore, landscapes of vector output neural networks and dynamics of gradient descent type methods can be analyzed by leveraging our results.…”
Section: Discussion
Confidence: 92%
“…These results demonstrate that polynomial activation neural networks are a promising direction for further exploration. Convexity of infinitely wide neural networks was first considered in [8] and later in [5]. A convex geometric characterization of finite width neural networks was developed in [12, 11, 6]. Exact convex optimization representations of finite width two-layer ReLU neural network problems were developed first in [35] and extended to leaky ReLU [25] and polynomial activation functions [7].…”
Confidence: 99%