2020 IEEE International Students' Conference on Electrical, Electronics and Computer Science (SCEECS)
DOI: 10.1109/sceecs48394.2020.94
Comparison of Various Learning Rate Scheduling Techniques on Convolutional Neural Network

Cited by 39 publications (17 citation statements). References 10 publications.
“…Thus, there exists a differentiation over the water area, marked by yellow in the final output prediction map. Channels (55, 56, 57) cover the lower bank and edges; however, these are markedly different from the edges picked up by channels (37, 38, 39). Channels (58, 59, 60) get activated at the right bank, marked by cyan in the final output prediction map.…”
Section: Discussion (mentioning)
confidence: 81%
“…In deep learning, the learning rate is one of the hyperparameters that decides the step size each time the model progresses toward a minimum of the loss function. Hence, it is crucial to tune the learning rate properly; otherwise, the model may converge slowly with a learning rate that is too small, or diverge from the optimal error points with a learning rate that is too large [55]. This study sets three candidate values (1e-2, 1e-3, or 1e-4) to choose from using Keras Tuner.…”
Section: Learning Rate (mentioning)
confidence: 99%
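The Keras Tuner search described in that excerpt could be sketched roughly as follows. Only the three candidate learning rates come from the excerpt; the small CNN, the tuner type, and settings such as `max_trials` and `objective` are illustrative assumptions rather than the cited study's actual configuration.

```python
# Sketch: selecting the learning rate with Keras Tuner (illustrative model and settings).
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    # Candidate learning rates as listed in the excerpt; the CNN itself is a placeholder.
    lr = hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])
    model = keras.Sequential([
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=lr),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=3)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
```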
“…Choosing the learning rate can be difficult, since a value that is too small can lead to a lengthy training procedure with significant training error, whereas a value that is too large can lead to learning a sub-optimal set of weights too quickly (without reaching a local minimum) or to an unstable training process [51]. To reduce the learning rate, ReduceLROnPlateau was used [52]. When learning becomes static, models frequently benefit from reducing the learning rate by a factor of 2-10.…”
Section: Learning Rate Optimisation (mentioning)
confidence: 99%
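A minimal sketch of the ReduceLROnPlateau callback mentioned above is given here. The monitored metric, patience, and minimum learning rate are illustrative assumptions; `factor=0.5` corresponds to a factor-of-2 cut, at the low end of the 2-10 range noted in the excerpt.

```python
# Sketch: lowering the learning rate when validation loss stops improving (Keras callback).
from tensorflow import keras

reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",  # watch validation loss for stagnation (assumed metric)
    factor=0.5,          # halve the learning rate, i.e. a factor-of-2 reduction
    patience=3,          # wait 3 epochs without improvement before reducing (assumed)
    min_lr=1e-6,         # never reduce below this floor (assumed)
)

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=50, callbacks=[reduce_lr])
```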