2020
DOI: 10.1016/j.neucom.2020.02.113

Visualising basins of attraction for the cross-entropy and the squared error neural network loss functions

Cited by 62 publications (47 citation statements)
References 33 publications (47 reference statements)
“…To minimize overfitting, training was forced to finish before repetition K if no improvement was observed for seven iterations; this control was implemented using early stopping [50]. As the COVID-19 dataset used poses a binary classification problem, DenseNet121 is compiled with the binary cross-entropy loss [51]. The Adam optimizer algorithm [52] was used with a constant learning rate of 2e-5 within the feature extraction method.…”
Section: Experiments Results (mentioning)
Confidence: 99%
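The setup described in this excerpt maps onto a few standard Keras calls. Below is a minimal sketch, assuming TensorFlow/Keras with an ImageNet-pretrained DenseNet121 used as a frozen feature extractor; the classifier head, input shape, dataset objects, and epoch count K are hypothetical placeholders, not taken from the cited work.

```python
# Minimal sketch (not the cited paper's code) of the described training setup,
# assuming TensorFlow/Keras. Dataset objects and the head are hypothetical.
import tensorflow as tf

# DenseNet121 backbone used as a frozen feature extractor.
backbone = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
backbone.trainable = False

# Single sigmoid unit for the binary classification task.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Binary cross-entropy loss [51] with Adam [52] at a constant lr of 2e-5.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss="binary_crossentropy",
    metrics=["accuracy"])

# Early stopping [50]: halt training if the validation loss shows
# no improvement for seven consecutive epochs.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=7, restore_best_weights=True)

# model.fit(train_ds, validation_data=val_ds, epochs=K, callbacks=[early_stop])
```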
“…Bosman et al [16][17][18][19] applied and adapted standard fitness landscape analysis techniques to error landscapes. Studies include: the influence of search space boundaries on the landscape analysis [16], the influence of regularisation on error surfaces [17], the influence of architecture settings on modality of the landscape [18], and the effect of different loss functions on the basins of attraction [19].…”
Section: Error Landscapes (mentioning)
Confidence: 99%
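To make the kind of analysis in [19] concrete, the sketch below evaluates the cross-entropy and squared error losses of a single sigmoid unit along a straight line between two points in weight space, one elementary landscape-analysis probe. The toy data, endpoints, and model are illustrative assumptions, not the sampling methodology of the cited studies.

```python
# Minimal sketch of a 1-D linear interpolation through weight space, comparing
# the cross-entropy and squared error loss surfaces of a single sigmoid unit
# on toy data. Data and interpolation endpoints are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))                 # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy binary labels

def losses(w):
    """Return (cross-entropy, squared error) for sigmoid-unit weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    p = np.clip(p, 1e-12, 1 - 1e-12)          # numerical safety for log
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    se = np.mean((y - p) ** 2)
    return ce, se

# Walk the line between two random weight vectors and record both losses;
# comparing the two curves hints at how the loss choice shapes the basins.
w_a, w_b = rng.normal(size=2), rng.normal(size=2)
for alpha in np.linspace(0.0, 1.0, 11):
    w = (1 - alpha) * w_a + alpha * w_b
    ce, se = losses(w)
    print(f"alpha={alpha:.1f}  cross-entropy={ce:.4f}  squared-error={se:.4f}")
```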
“…In general, each algorithm in a deep learning task has a loss function [37], [38]. The algorithm minimizes the loss function so that the model fits the training data better.…”
Section: Focus On Weights (mentioning)
Confidence: 99%
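The statement above is the core training loop in miniature: gradient descent drives a loss down, and the model fits the data as it does. A minimal sketch, assuming a one-parameter linear model with squared error loss; the data and learning rate are hypothetical.

```python
# Minimal sketch: gradient descent on a squared error loss fits a
# one-parameter linear model to toy data.
import numpy as np

X = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * X                                # ground-truth slope is 2

w, lr = 0.0, 0.05                          # initial weight, step size
for step in range(200):
    pred = w * X
    grad = np.mean(2 * (pred - y) * X)     # d/dw of the mean squared error
    w -= lr * grad                         # step downhill on the loss surface

print(f"learned w = {w:.3f} (target 2.0)") # loss shrinks, fit improves
```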