2014
DOI: 10.1007/978-3-662-44654-6_25
Limited Generalization Capabilities of Autoencoders with Logistic Regression on Training Sets of Small Sizes

Abstract (Part 6: Classification Pattern Recognition): Deep learning is a promising approach to extracting useful nonlinear representations of data. However, it is usually applied with large training sets, which are not always available in practical tasks. In this paper, we consider stacked autoencoders with logistic regression as the classification layer and study their usefulness for the task of image categorization depending on the size of the training set. Hand-crafted image descriptors are proposed and u…
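The architecture the abstract refers to, a stack of autoencoder layers with a logistic-regression classifier on top, can be sketched as follows. This is a minimal illustration in PyTorch; the layer sizes, activations, optimiser and training loop are assumptions made for the example, not the authors' configuration, and the layer-wise unsupervised pretraining of each autoencoder layer is only indicated in comments.

```python
# Minimal sketch (assumed PyTorch implementation) of a stacked autoencoder
# topped by a logistic-regression output layer for image categorization.
import torch
import torch.nn as nn

class StackedAutoencoderClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden_dims=(256, 64), n_classes=10):
        super().__init__()
        # Encoder stack: in layer-wise pretraining each Linear+Sigmoid layer
        # would first be trained to reconstruct its own input (decoders omitted).
        dims = (in_dim,) + tuple(hidden_dims)
        self.encoder = nn.Sequential(
            *[layer for i in range(len(dims) - 1)
              for layer in (nn.Linear(dims[i], dims[i + 1]), nn.Sigmoid())]
        )
        # Logistic regression on the learned representation: one linear layer,
        # softmax is implicit in the cross-entropy loss.
        self.classifier = nn.Linear(hidden_dims[-1], n_classes)

    def forward(self, x):
        return self.classifier(self.encoder(x))

model = StackedAutoencoderClassifier()
loss_fn = nn.CrossEntropyLoss()   # multinomial logistic regression objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One supervised fine-tuning step on a toy labelled batch of flattened images.
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

With a small labelled set, only the fine-tuning stage above is starved of data, which is the regime whose generalization behaviour the paper studies.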

Cited by 3 publications (2 citation statements). References: 12 publications.
“…However, in some areas of the medical field, such as rare diseases, only a small dataset is available. Dimensionality reduction using AE is primarily performed via semi-supervised learning and, owing to its low risk of overlearning, it has attracted considerable attention as a suitable feature-reduction method for DL on a small dataset [26,27]. In this study, 3D-CCVAE performed better than the model without AE, indicating that dimensionality reduction via AE can be used to learn a DL model without increasing the overlearning risk.…”
Section: Discussion (mentioning)
confidence: 74%
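The citing authors describe using an autoencoder for dimensionality reduction so that only a small supervised model has to be fitted on the limited labelled data. A minimal sketch of that two-stage workflow is given below; it uses a plain fully connected autoencoder rather than the 3D-CCVAE mentioned in the quote, and all sizes, data and hyperparameters are placeholders chosen for illustration.

```python
# Assumed two-stage workflow: unsupervised AE pretraining for feature
# reduction, then a small supervised classifier on the compressed codes.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())
recon_loss = nn.MSELoss()
opt_ae = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

# 1) Unsupervised stage: reconstruct (possibly unlabelled) data; no labels used.
x_unlabelled = torch.rand(256, 784)
for _ in range(10):
    opt_ae.zero_grad()
    loss = recon_loss(decoder(encoder(x_unlabelled)), x_unlabelled)
    loss.backward()
    opt_ae.step()

# 2) Supervised stage: logistic regression on the 32-d codes of a small
#    labelled set; the frozen encoder keeps the number of fitted parameters low.
clf = nn.Linear(32, 2)
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
x_small, y_small = torch.rand(40, 784), torch.randint(0, 2, (40,))
with torch.no_grad():
    codes = encoder(x_small)          # features treated as fixed inputs
opt_clf.zero_grad()
nn.CrossEntropyLoss()(clf(codes), y_small).backward()
opt_clf.step()
```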
“…However, to do this, it is required to set up a fairly large database with the possibility of monitoring the orientation of the patterns to be recognized. Taking into account our investigations, which show that deep-learning networks are limited in extracting a priori unknown invariants, as well as the limited generalizing capability of these networks on samples of small size [17], it can be concluded that the deep-learning paradigm requires appreciable extension.…”
Section: Generalizing Capability of CNNs During Training With Rotation (mentioning)
confidence: 90%