2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.204
Building a Regular Decision Boundary with Deep Networks

Abstract: In this work, we build a generic architecture of Convolutional Neural Networks …

Cited by 12 publications (10 citation statements)
References 29 publications
“…The i-RevNet is constructed through a cascade of homeomorphic layers which can be fully inverted, with an explicit formula, up to the last hidden layer and therefore no information is discarded. The phenomenon of progressive separation and contraction in non-invertible networks is also observed in several other works [15,21].…”
supporting
confidence: 72%
“…3.1 for this case shows that the layerwise k = 1 procedure will try to progressively improve the linear separability. Progressive linear separation has been empirically studied in end-to-end CNNs (Zeiler & Fergus, 2014; Oyallon, 2017) as an indirect consequence, while the k = 1 training permits us to study this basic principle more directly as the layer objective. Concurrent work (Elad et al., 2019) follows the same argument to use a layerwise training procedure to evaluate mutual information more directly.…”
Section: Auxiliary Problems and the Properties They Induce (mentioning)
confidence: 99%
“…First, the use of a global objective means that the final functional behavior of individual intermediate layers of a deep network is only indirectly specified: it is unclear how the layers work together to achieve high-accuracy predictions. Several authors have suggested and shown empirically that CNNs learn to implement mechanisms that progressively induce invariance to complex, but irrelevant variability (Mallat, 2016; Yosinski et al., 2015) while increasing linear separability (Zeiler & Fergus, 2014; Oyallon, 2017; Jacobsen et al., 2018) of the data. Progressive linear separability has been shown empirically but it is unclear whether this is merely the consequence of other strategies implemented by CNNs, or if it is a sufficient condition for the high performance of these networks.…”
Section: Introduction (mentioning)
confidence: 99%
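The progressive linear separability discussed in the citation statements above is typically measured with a layerwise linear probe: fit a linear classifier on each layer's features and track its accuracy with depth. The sketch below is a minimal, hypothetical illustration of that measurement only; it uses random ReLU layers as stand-ins for trained CNN layers, so it shows the probing procedure rather than the trained-network result reported in the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: 200 points in 10 dimensions, label driven by the
# first coordinate plus noise.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

def linear_probe_accuracy(feats, labels):
    """Fit a least-squares linear classifier on feats and report accuracy."""
    A = np.hstack([feats, np.ones((feats.shape[0], 1))])  # append bias column
    w, *_ = np.linalg.lstsq(A, 2 * labels - 1, rcond=None)  # regress to +/-1 targets
    return float(np.mean((A @ w > 0).astype(int) == labels))

# Cascade of random ReLU layers standing in for trained CNN layers:
# probe the representation before each layer is applied.
feats = X
for depth in range(4):
    acc = linear_probe_accuracy(feats, y)
    print(f"layer {depth}: probe accuracy = {acc:.2f}")
    W = rng.normal(size=(feats.shape[1], feats.shape[1])) / np.sqrt(feats.shape[1])
    feats = np.maximum(feats @ W, 0.0)  # one ReLU layer
```

In the cited setting the layers are trained, not random, and the probe accuracy is the quantity observed to increase with depth.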
“…One of the reasons why GCNs lack interpretability is because no training objective is assigned to a specific layer except the final one: end-to-end training makes their analysis difficult [34]. They also tend to oversmooth graph representations [47], because applying successively an averaging operator leads to smoother representations.…”
Section: Introduction (mentioning)
confidence: 99%
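The oversmoothing effect mentioned in the last statement, where successively applying an averaging operator makes node representations smoother, can be seen directly on a toy graph. The sketch below is a hypothetical illustration with a row-normalized adjacency matrix of a path graph standing in for a GCN's propagation step; repeated application shrinks the spread of the node features toward a constant.

```python
import numpy as np

# Path graph on 6 nodes with self-loops; the row-normalized adjacency
# acts as the averaging operator applied by each propagation layer.
n = 6
A = np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
P = A / A.sum(axis=1, keepdims=True)  # row-stochastic averaging matrix

H = np.arange(n, dtype=float).reshape(-1, 1)  # one scalar feature per node
initial_spread = float(H.max() - H.min())
for layer in range(10):
    H = P @ H  # one layer of neighborhood averaging

print(f"spread: {initial_spread:.2f} -> {float(H.max() - H.min()):.2f}")
```

Because the operator is row-stochastic on a connected graph, iterating it drives all node features toward a common value, which is the representation-smoothing behavior the citing work attributes to stacked averaging layers.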