2023 International Conference on Advancement in Computation & Computer Technologies (InCACCT)
DOI: 10.1109/incacct57535.2023.10141714
Potato leaf disease prediction using RMSProp, Adam and SGD optimizers

Cited by 10 publications (2 citation statements)
References 10 publications
“…Our final training is conducted in Graham using 4 GPUs simultaneously, following a distributed training strategy invoked via tf.distribute.MirroredStrategy(). We have also performed extensive DeepSC training using SGD with momentum [50] and RMSprop [51]. Although we have obtained much better training results with SGD with momentum than with RMSprop, none of these optimizers has led to better training results than the ones we have obtained with Adam.…”
Section: Training Results of DeepSC
confidence: 84%
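The excerpt above describes multi-GPU training with tf.distribute.MirroredStrategy() and a comparison of SGD with momentum, RMSprop, and Adam. A minimal sketch of that setup in TensorFlow/Keras follows; the model is a placeholder (the cited DeepSC architecture is not reproduced here), and the learning rates and layer sizes are illustrative assumptions, not values from the cited work.

```python
import tensorflow as tf

# Distribute training across all visible GPUs, as in the excerpt.
strategy = tf.distribute.MirroredStrategy()
print(f"Replicas in sync: {strategy.num_replicas_in_sync}")

with strategy.scope():
    # Placeholder model; the cited work trains DeepSC, whose
    # architecture is not reproduced here.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    # The excerpt reports Adam outperforming both alternatives;
    # SGD with momentum and RMSprop are the optimizers it compares against.
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
    # optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9)
    # optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-3)
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```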
“…VGG is a series of CNN architectures with increasing depth and complexity, demonstrating the power of deeper networks for image classification. Optimization algorithms include RMSProp, which adaptively adjusts learning rates for individual parameters based on squared gradients and is effective for non-stationary data; ADAM [15], which combines features of RMSProp and AdaGrad, addressing their limitations and achieving efficient learning with momentum; and Jaya [27], a nature-inspired optimizer based on the foraging behaviour of jays, demonstrating robustness and performance across various tasks.…”
Section: Introduction
confidence: 99%
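The paper's title names the three optimizers it compares: RMSProp, Adam, and SGD. A hedged sketch of such a comparison in TensorFlow/Keras is shown below; build_model, the input shape, the three-class output (matching common potato-leaf datasets with early blight, late blight, and healthy classes), and all learning rates are illustrative assumptions rather than details taken from the paper.

```python
import tensorflow as tf

def build_model(num_classes: int = 3) -> tf.keras.Model:
    # Hypothetical small CNN classifier; the paper's exact
    # architecture is not described in the excerpts.
    return tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# The three optimizers named in the paper's title; learning rates
# and the momentum value are assumed, not taken from the paper.
optimizers = {
    "rmsprop": tf.keras.optimizers.RMSprop(learning_rate=1e-3),
    "adam": tf.keras.optimizers.Adam(learning_rate=1e-3),
    "sgd": tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9),
}

for name, opt in optimizers.items():
    model = build_model()
    model.compile(optimizer=opt,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
    print(f"compiled model with {name}")
```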