2020
DOI: 10.3390/app10217817

Empirical Evaluation of the Effect of Optimization and Regularization Techniques on the Generalization Performance of Deep Convolutional Neural Network

Abstract: The main goal of any classification or regression task is to obtain a model that will generalize well on new, previously unseen data. Due to the recent rise of deep learning and many state-of-the-art results obtained with deep models, deep learning architectures have become one of the most used model architectures nowadays. To generalize well, a deep model needs to learn the training data well without overfitting. The latter implies a correlation of deep model optimization and regularization with generalization…

Cited by 16 publications (12 citation statements). References 19 publications.
“…We found one limitation in Moradi et al [5], Srivastava et al [18], Kamalov and Leung [20], Marin et al [21], Zeiler and Fergus [22] and Bishop [23] based on the analyses mentioned above. The use of a single dataset and two datasets of equal sizes to compare and draw conclusions regarding the efficacy of regularization algorithms has been noted as a drawback.…”
Section: Analysis Of Related Work
confidence: 92%
“…Swastika et al [21] applied some regularization techniques to some deep-learning models for the detection of Malaria. The dataset used was a malaria dataset with 27588 images.…”
Section: Analysis Of Related Work
confidence: 99%
“…[81] VGG19 architecture with a logistic regression classifier: 96% (Folio), 96% (Flavia), and 99% (Swedish leaf) datasets. [82] AousethNet, Mendeley dataset (MD2020): 99%. [Table residue: a numbered list of 46 medicinal plant species used as dataset classes, from Bridelia ferruginea (6) through Trema orientalis (46), omitted here.]…”
Section: Log Gabor Filters
confidence: 99%
“…DropConnect, dropout, data augmentation, stochastic pooling, batch normalization, weight decay, early stopping, and ℓ1 and ℓ2 regularization are some of the common regularization strategies used to prevent overfitting [35].…”
Section: Literature Review
confidence: 99%
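To make one of the strategies quoted above concrete, inverted dropout can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not code from the cited works; the `dropout` function name and its parameters are assumptions made here for demonstration.

```python
import numpy as np

def dropout(activations, p_drop=0.5, rng=None, training=True):
    """Inverted dropout: randomly zero units during training and
    rescale the survivors by 1/(1 - p_drop), so the expected value
    of each unit is unchanged and inference needs no rescaling."""
    if not training or p_drop == 0.0:
        # At inference time the layer is an identity map.
        return activations
    rng = rng or np.random.default_rng()
    keep = 1.0 - p_drop
    mask = rng.random(activations.shape) < keep  # True with prob. `keep`
    return activations * mask / keep

# Expected activation is preserved: the mean stays close to 1.
x = np.ones((10000,))
out = dropout(x, p_drop=0.5, rng=np.random.default_rng(0))
```

Because survivors are rescaled at training time, the same forward pass works unmodified at test time, which is the usual reason this "inverted" variant is preferred.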
“…Overfitting occurs when a neural network becomes too adept at learning the details and noise — that is, any unwanted interference affecting the quality, integrity, or reliability of the training data — causing it to lose its ability to generalize to new, unseen data. These challenges highlight the delicate balance required in the training process, emphasizing the need for techniques such as regularization and robust validation methods to mitigate the risks. In [3], the following regularization methods were used: L1 (lasso regression) and L2 (ridge regression). These machine-learning methods add a minimizing penalty term to the loss function.…”
unclassified
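The penalty term described in the quote above can be sketched directly: L1 adds the sum of absolute weight values to the loss, L2 adds the sum of squared weights. The `regularized_mse` helper and its parameter names are hypothetical, chosen here for illustration rather than taken from [3].

```python
import numpy as np

def regularized_mse(y_true, y_pred, weights, l1=0.0, l2=0.0):
    """Mean squared error plus optional L1 (lasso) and L2 (ridge)
    penalty terms computed on the model weights."""
    mse = np.mean((y_true - y_pred) ** 2)
    penalty = l1 * np.sum(np.abs(weights)) + l2 * np.sum(weights ** 2)
    return mse + penalty

w = np.array([0.5, -1.0, 2.0])
# With zero error, only the penalty remains:
base  = regularized_mse(np.zeros(3), np.zeros(3), w)          # 0.0
lasso = regularized_mse(np.zeros(3), np.zeros(3), w, l1=0.1)  # 0.1 * 3.5  = 0.35
ridge = regularized_mse(np.zeros(3), np.zeros(3), w, l2=0.1)  # 0.1 * 5.25 = 0.525
```

Because the L1 term penalizes absolute magnitude, it tends to drive small weights exactly to zero (sparsity), whereas the quadratic L2 term shrinks all weights smoothly — the usual distinction between lasso and ridge regression.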