2020
DOI: 10.1007/s10618-020-00722-8
A survey of deep network techniques all classifiers can adopt

Abstract: Deep neural networks (DNNs) have introduced novel and useful tools to the machine learning community. Other types of classifiers can potentially make use of these tools as well to improve their performance and generality. This paper reviews the current state of the art for deep learning classifier technologies that are being used outside of deep neural networks. Non-neural network classifiers can employ many components found in DNN architectures. In this paper, we review the feature learning, optimization, and…

Cited by 18 publications (8 citation statements)
References 105 publications
“…The amounts of data and computational power required for learning have increased. Deep learning uses DNNs with hundreds of layers and a large number of structural parameters [145]–[161]. It is therefore prone to overfitting, a condition in which the model fits the training data too closely, fails to generalize, and cannot achieve high accuracy on unseen data.…”
Section: Amounts of Data and Computational Power
confidence: 99%
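The overfitting this excerpt warns about can be shown with a toy illustration (my own sketch, not from the cited work): a polynomial with as many free parameters as training points fits the noise exactly, driving training error to essentially zero, which is precisely the failure mode the excerpt attributes to heavily parameterized DNNs.

```python
import numpy as np

# Eight noisy samples of a sine curve (fixed seed for reproducibility).
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(8)

def train_mse(degree):
    """Training-set mean squared error of a polynomial fit of the given degree."""
    coef = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coef, x_train) - y_train) ** 2))

# Degree 3 (4 parameters) leaves a residual; degree 7 (8 parameters,
# one per data point) interpolates the noise and its training error
# collapses toward zero -- without the fit being any more trustworthy.
print(f"degree 3 train MSE: {train_mse(3):.2e}")
print(f"degree 7 train MSE: {train_mse(7):.2e}")
```

The same effect scales up: when parameter count approaches or exceeds the amount of training data, low training error stops being evidence of generalization.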
“…The amounts of data and computational power required for learning have increased. Deep learning uses DNNs with hundreds of layers and a large number of parameters related to structure [145][146][147][148][149][150][151][152][153][154][155][156][157][158][159][160][161]. Therefore, it is prone to overfitting, which is a condition where the learning data are overfitted, generalization is not possible, and high accuracy cannot be achieved with unknown data.…”
Section: Amounts Of Data and Computational Powermentioning
confidence: 99%
“…For example, an insufficient sample size introduces potential bias or noise into the trained algorithm, which results in low replicability and performance [64,65]. In response, efficient resampling schemes can be proposed for classifiers to address the limited-sample-size problem and improve their generalizability accordingly, an important aspect of technology adoption, as shown in several investigations [21,66]. Generalizability problems are one of the main barriers to the effective implementation of classifiers in the clinical context.…”
Section: Calculation of Fuzzy Relative Priorities and Interdependence...
confidence: 99%
“…Specifically, the pooling operation computes local statistical characteristics over a region of the input and replaces the elements in that region with them. Through the operation of the pooling layer, the number of parameters in the entire network is reduced, which alleviates the complexity of parameter calculation and storage [15,16]. The average pooling operation can be expressed as … Setting j…”
Section: Convolution
confidence: 99%
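The average pooling this excerpt describes (its equation is elided in the extract) can be sketched as follows. This is a minimal NumPy illustration with an assumed 2×2 window and stride 2, not code from the cited paper:

```python
import numpy as np

def average_pool2d(x, k=2, stride=2):
    """Average pooling: replace each k x k window of the input with its
    mean, shrinking the feature map (and downstream parameter count)."""
    h, w = x.shape
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = window.mean()  # local statistic replaces the region
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(average_pool2d(x))  # 4x4 input -> 2x2 map of local means
```

Each output element summarizes a 2×2 region by its mean, so a 4×4 input becomes a 2×2 map; swapping `mean()` for `max()` gives max pooling, the other common local statistic.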