2019
DOI: 10.1080/01431161.2019.1694725
Analysis of various optimizers on deep convolutional neural network model in the application of hyperspectral remote sensing image classification

Cited by 184 publications (83 citation statements)
References 37 publications
“…The Cross-Entropy Loss, also referred to as Logarithmic Loss, is the most suitable for both binary and multi-class classification problems [26] and was therefore selected for the optimization process. To train the classifier network, several optimizers used in state-of-the-art methods [12] were tried; those that achieved the best results, presented in the following section, were Adadelta, Adam, RMSprop, and SGD (Stochastic Gradient Descent).…”
Section: Lightweight Volumetric CNN Architecture (mentioning)
confidence: 99%
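The multi-class cross-entropy (logarithmic) loss the excerpt selects can be sketched as follows; the function name, array shapes, and clipping constant are illustrative choices, not taken from the cited paper.

```python
import numpy as np

def cross_entropy_loss(probs, labels, eps=1e-12):
    """Mean negative log-likelihood of the true class.

    probs  -- (N, C) predicted class probabilities (rows sum to 1)
    labels -- (N,)   integer class indices
    """
    n = probs.shape[0]
    # Clip to avoid log(0) on confidently wrong predictions.
    p_true = np.clip(probs[np.arange(n), labels], eps, 1.0)
    return -np.mean(np.log(p_true))

# A perfectly confident, correct prediction gives zero loss;
# a uniform 50/50 guess on two classes gives ln 2 ≈ 0.693.
print(cross_entropy_loss(np.array([[0.5, 0.5]]), np.array([0])))
```

The same scalar works for the binary case, which is why the excerpt treats one loss as covering both settings.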
“…Adam is a stochastic objective-function optimization algorithm based on first-order gradients that adaptively estimates lower-order moments. It is computationally efficient, has small memory requirements, and is well suited to optimization problems with large data volumes or many parameters [44]. The algorithm is as follows:…”
Section: Adaptive Moment Estimation Optimizer (Adam) (mentioning)
confidence: 99%
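The excerpt breaks off before the algorithm itself; a minimal sketch of the standard Adam update (bias-corrected first- and second-moment estimates, default β₁ = 0.9, β₂ = 0.999), applied to a toy quadratic rather than the cited paper's exact formulation:

```python
import numpy as np

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=500):
    """Minimize a function, given its gradient, with the Adam update rule."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first moment: running mean of gradients
    v = np.zeros_like(x)  # second moment: running mean of squared gradients
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias correction for zero init
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = ||x - 3||^2, whose gradient is 2(x - 3).
x_min = adam_minimize(lambda x: 2.0 * (x - 3.0), x0=[0.0, 0.0])
print(x_min)  # close to [3, 3]
```

The per-parameter scaling by √v̂ is what gives Adam the low memory footprint and large-parameter-count suitability the excerpt describes: only two extra arrays the size of the parameters are kept.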
“…x_Ne) represent a leading (employed) bee population, and let X0 represent the initial leading bee population. The bee population can be initialized by Equation (1).…”
Section: Review of ABC (mentioning)
confidence: 99%
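Equation (1) is not reproduced in the excerpt. In the standard ABC formulation, food sources (employed-bee positions) are initialized uniformly at random within the per-dimension search bounds, x_ij = lo_j + U(0,1)·(hi_j − lo_j); the sketch below assumes that convention, and the function name and seed are illustrative.

```python
import numpy as np

def init_bee_population(n_bees, lower, upper, seed=0):
    """Uniform-random initial food sources within [lower, upper] per dimension.

    Assumed standard ABC initialization: x_ij = lo_j + U(0,1) * (hi_j - lo_j).
    """
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + rng.random((n_bees, lower.size)) * (upper - lower)

# Ten employed bees in a 2-D search space with different bounds per axis.
X0 = init_bee_population(n_bees=10, lower=[-5.0, 0.0], upper=[5.0, 1.0])
print(X0.shape)  # (10, 2)
```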
“…Compared with traditional classification methods such as maximum likelihood, K-Means, and ISODATA, and with deep learning methods, non-parametric classifiers such as decision trees and support vector machines do not require strong assumptions about the data distribution and have clear advantages on problems involving small samples, nonlinearity, and local minima [1], [2], so they have been widely used in remote sensing data classification. In particular, the SVM algorithm does not require the data to be normally distributed and has outstanding advantages in small-sample, nonlinear, and high-dimensional classification problems, making it well suited to classification in complex surface environments [3], [4], such as urban scenes [5]–[9], crops [10]–[12], forests [13], [14], and wetlands [15].…”
Section: Introduction (mentioning)
confidence: 99%
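The nonlinearity claim in the excerpt rests on the kernel trick: an SVM with a Gaussian (RBF) kernel separates classes that are not linearly separable without modeling the data distribution. A minimal sketch of that kernel, with illustrative inputs:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2).

    Expanding the squared distance as |x|^2 + |y|^2 - 2 x.y avoids an
    explicit pairwise loop; the max(., 0) guards against tiny negative
    values from floating-point cancellation.
    """
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X)
print(K)  # diagonal exactly 1; off-diagonal exp(-1) ≈ 0.368
```

Because the kernel depends only on pairwise distances, the resulting classifier needs no normality assumption, matching the advantage the excerpt attributes to SVMs on small, high-dimensional samples.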