2018
DOI: 10.48550/arxiv.1805.10451
Preprint

Geometric Understanding of Deep Learning

Cited by 15 publications (23 citation statements)
References 15 publications
Citation types: 0 supporting, 23 mentioning, 0 contrasting
“…DeconvNet (Zeiler and Fergus 2014), LIME (Ribeiro, Singh, and Guestrin 2016), and SincNet (Ravanelli and Bengio 2018) train a new model to explain the trained model. Geometric analysis can also reveal the internal structure indirectly (Montufar et al. 2014; Lei et al. 2018; Fawzi et al. 2018). Activation maximization (Erhan, Courville, and Bengio 2010) and GANs (Nguyen et al. 2016) have been used to explain neural networks through examples.…”
Section: Related Work (mentioning)
confidence: 99%
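The activation-maximization technique cited above is straightforward to sketch: starting from a blank input, run gradient ascent on the input to maximize a chosen unit's activation. Below is a minimal sketch, assuming a small untrained PyTorch MLP as a hypothetical stand-in for the trained model; the unit index and the L2 penalty weight are illustrative choices.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# tiny untrained MLP, a hypothetical stand-in for a trained model
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

x = torch.zeros(1, 64, requires_grad=True)  # start from a blank input
optimizer = torch.optim.Adam([x], lr=0.1)
unit = 3                                    # output unit to visualize

for _ in range(200):
    optimizer.zero_grad()
    activation = model(x)[0, unit]
    # ascend the activation; a small L2 penalty keeps the input bounded
    loss = -activation + 1e-3 * x.pow(2).sum()
    loss.backward()
    optimizer.step()

print(f"final activation of unit {unit}: {model(x)[0, unit].item():.3f}")
```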
“…Although such regions are complicated, each decision region of a DNN classifier has been shown to be topologically connected (Fawzi et al. 2018). It has also been shown that the manifolds learned by DNNs, and the distributions over them, are closely related to the representation capability of a network (Lei et al. 2018).…”
Section: Geometric Analysis on the Inside of Deep Neural Network (mentioning)
confidence: 99%
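The connectedness claim from Fawzi et al. (2018) can be probed empirically, though not proved, with a simple check: take two inputs the model assigns to the same class and verify that the predicted class stays constant along the straight segment between them. A connected region need not contain the straight segment, so a failed check is suggestive rather than conclusive. A minimal sketch, assuming a toy untrained classifier and random inputs in place of real data:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# toy untrained 3-class classifier, a hypothetical stand-in
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 3))

def predicted_class(x):
    with torch.no_grad():
        return model(x).argmax(dim=-1).item()

# random inputs standing in for two same-class samples
a, b = torch.randn(2), torch.randn(2)
if predicted_class(a) == predicted_class(b):
    c = predicted_class(a)
    ts = torch.linspace(0.0, 1.0, 200)
    constant = all(predicted_class((1 - t) * a + t * b) == c for t in ts)
    print("class constant along the segment:", constant)
else:
    print("endpoints already disagree; resample a and b")
```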
“…Measuring certain complexities of a fixed neural network by counting its linear pieces arises in several recent works (e.g., Lei et al., 2018), and the question of whether the dependence on n is redundant (compared with that of sparse recovery guarantees) warrants further study.…”
Section: Statistical Guarantee (mentioning)
confidence: 99%
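The "counting linear pieces" quantity referred to above can be made concrete for a one-hidden-layer ReLU network: restricted to a line through input space, the network is piecewise linear, and each distinct ReLU on/off pattern along the line corresponds to one linear piece. A minimal sketch, assuming a random untrained layer and counting pattern changes over a finite sample of the line (a finite grid can miss very narrow pieces, so this is a lower-bound estimate):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = nn.Linear(2, 32)          # random ReLU layer, hypothetical

direction = torch.randn(2)
ts = torch.linspace(-5.0, 5.0, 10_000).unsqueeze(1)
points = ts * direction            # sample points t * d on the line

with torch.no_grad():
    patterns = hidden(points) > 0  # ReLU on/off pattern at each point
# each change of pattern along the line starts a new linear piece
changes = (patterns[1:] != patterns[:-1]).any(dim=1).sum().item()
print("linear pieces along the line (lower bound):", changes + 1)
```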
“…Manifold learning has been widely applied in many computer vision tasks, such as face recognition [43], [44] and image classification [28], as well as in the hyperspectral imaging literature [42], [29]. Generally, a data manifold follows the law of manifold distribution: in real-world applications, high-dimensional data of the same class usually lies close to a low-dimensional manifold [21]. Hyperspectral images, which provide dense spectral sampling at each pixel, therefore possess good intrinsic manifold structure.…”
Section: Introduction (mentioning)
confidence: 99%
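The "law of manifold distribution" quoted above has a simple numerical signature: data concentrated near a low-dimensional manifold in a high-dimensional ambient space shows its variance concentrated in a few principal directions. A minimal sketch on synthetic data (a noisy 3-dimensional linear subspace of R^100, assumed here as a stand-in for, e.g., same-class hyperspectral pixels):

```python
import torch

torch.manual_seed(0)
n, ambient, intrinsic = 1000, 100, 3
# orthonormal basis of a random 3-D subspace of R^100
basis = torch.linalg.qr(torch.randn(ambient, intrinsic)).Q
# points on the subspace plus small ambient noise
data = torch.randn(n, intrinsic) @ basis.T + 0.01 * torch.randn(n, ambient)

data = data - data.mean(dim=0)
svals = torch.linalg.svdvals(data)
explained = svals**2 / (svals**2).sum()
# nearly all variance sits in the top `intrinsic` directions
print("variance in top 3 directions:", explained[:3].sum().item())
```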