2021
DOI: 10.1142/s0218202521500500
Stability for the training of deep neural networks and other classifiers

Abstract: We examine the stability of loss-minimizing training processes that are used for deep neural networks (DNN) and other classifiers. While a classifier is optimized during training through a so-called loss function, the performance of classifiers is usually evaluated by some measure of accuracy, such as the overall accuracy which quantifies the proportion of objects that are well classified. This leads to the guiding question of stability: does decreasing loss through training always result in increased accuracy…
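The guiding question can be illustrated with a small synthetic sketch (the labels and probabilities below are hypothetical and not taken from the paper): average cross-entropy loss can decrease while the proportion of correctly classified points drops, because the loss rewards confident predictions on a few examples more than bare correctness on many.

```python
import math

def cross_entropy(probs, labels):
    """Mean cross-entropy loss of predicted P(y=1) against binary labels."""
    return -sum(math.log(p if y == 1 else 1 - p)
                for p, y in zip(probs, labels)) / len(labels)

def accuracy(probs, labels):
    """Fraction of examples classified correctly at threshold 0.5."""
    return sum((p > 0.5) == (y == 1) for p, y in zip(probs, labels)) / len(labels)

labels = [1, 1, 1, 1]
A = [0.6, 0.6, 0.6, 0.01]    # mildly correct on three points, confidently wrong on one
B = [0.45, 0.45, 0.45, 0.95] # barely wrong on three points, confidently right on one

# Classifier B has lower mean loss than A, yet lower accuracy (0.25 vs 0.75):
# decreasing the loss need not increase accuracy.
```

This is the instability the abstract asks about: the training objective (loss) and the evaluation metric (accuracy) can move in opposite directions.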

Cited by 5 publications (7 citation statements)
References 16 publications
“…The main goal of this paper is to provide conditions on the training set T which ensure that this does not happen, and so accuracy is bounded from below as the loss decreases. The results in this paper build on similar stability of accuracy results for classifiers given in [BJS21].…”
Section: Introduction (supporting, confidence: 75%)
“…It is useful to review the first results given in this direction. In [BJS21] it was shown that small clusters in δX lead to instability of accuracy, and so a doubling condition on δX was introduced which prevents the existence of small clusters.…”
Section: Previous Results in This Direction (mentioning, confidence: 99%)
“…The test indicators before and after the experiment were compared with a paired t-test, and the results are expressed as mean ± standard deviation (x ± SD); p > 0.05 indicates that the difference is not significant, 0.01 < p ≤ 0.05 that it is significant, and p ≤ 0.01 that it is highly significant. 7,8…”
Section: Mathematical Statistics (mentioning, confidence: 99%)
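The procedure quoted above can be sketched as follows (the before/after measurements are hypothetical; obtaining the p-value itself requires a t-distribution CDF, which the Python standard library does not provide, so this sketch stops at the t statistic and its degrees of freedom):

```python
import math
import statistics

def paired_t(before, after):
    """Paired t statistic for pre/post measurements on the same subjects.

    Returns (t, df): compare |t| against the critical value of the
    t-distribution with df degrees of freedom for the chosen p threshold.
    """
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    # t = mean(d) / (sd(d) / sqrt(n)), with the sample (n-1) standard deviation
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

before = [10, 12, 11, 13, 12]  # hypothetical pre-experiment indicators
after  = [12, 14, 12, 15, 13]  # hypothetical post-experiment indicators
t, df = paired_t(before, after)
```

A large |t| relative to the critical value corresponds to a small p, i.e. the significant (0.01 < p ≤ 0.05) or highly significant (p ≤ 0.01) cases described in the quote.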