2022
DOI: 10.1109/access.2022.3152224

Cross Validation Voting for Improving CNN Classification in Grocery Products

Abstract: The development of deep neural networks in recent years has made it possible to solve highly complex computer vision classification problems. Although the results obtained with these classifiers are often high, certain sectors demand even greater accuracy from these systems. The accuracy of neural networks can be increased through ensemble learning, which combines different classifiers and selects a winner according to various criteria. These techniques have…
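As a rough illustration of the ensemble idea described in the abstract, the sketch below performs hard (majority) voting over the predictions of several classifiers. This is a minimal sketch, not the paper's implementation; the toy dataset, the choice of base classifiers, and all names are assumptions.

```python
# Minimal sketch of majority-voting ensemble classification (hypothetical
# setup; not the paper's actual models or grocery-product data).
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three heterogeneous base classifiers whose votes are combined.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=5000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    voting="hard",  # each model casts one vote; the majority class wins
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```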

Cited by 15 publications (7 citation statements)
References 38 publications

Citation statements (ordered by relevance):
“…The model structure is shown in Figure 14 and Table 15. To further improve the classification accuracy on the CIFAR-10 dataset, we employed a cross-validation ensemble, building on the experimental results and experience of J. D. Domingo et al. [35]. We adopted a five-fold cross-validation ensemble; the model structure is shown in Figure 15.…”
Section: Experiments Description
confidence: 99%
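The cross-validation ensemble this statement describes can be sketched as follows: train one model per fold, then average the models' predicted class probabilities at inference time. This is a hedged illustration assuming a generic scikit-learn workflow, not the citing paper's CNN on CIFAR-10.

```python
# Sketch of a five-fold cross-validation ensemble (assumed setup): one
# model is trained per fold split, and test-time predictions average the
# per-model class probabilities (soft voting).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[train_idx], y_train[train_idx])  # fit on this fold's training part
    models.append(model)

# Average the five probability estimates, then take the argmax class.
mean_proba = np.mean([m.predict_proba(X_test) for m in models], axis=0)
accuracy = (mean_proba.argmax(axis=1) == y_test).mean()
print("5-fold ensemble accuracy:", accuracy)
```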
“…Then, the group with the smaller number of windows is kept, while windows from the other group are randomly discarded until the sizes of both groups (DER and non-DER windows) match. This yields balanced input data (50% DER, 50% non-DER), which helps train the machine learning algorithms for a more general scenario and reduces the likelihood of overfitting [34].…”
Section: Data Processing
confidence: 99%
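A minimal sketch of this random-undersampling balancing step, assuming NumPy arrays of windows and binary labels (the variable names and data layout are illustrative, not from the cited paper):

```python
# Balance a binary dataset by randomly discarding windows from the
# majority class until both classes are the same size (assumed layout).
import numpy as np

def undersample_to_balance(X, y, rng=None):
    """Return (X, y) with equal counts of class 0 and class 1 windows."""
    rng = np.random.default_rng(rng)
    idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    n = min(len(idx0), len(idx1))  # size of the minority class
    keep = np.concatenate([
        rng.choice(idx0, size=n, replace=False),
        rng.choice(idx1, size=n, replace=False),
    ])
    rng.shuffle(keep)
    return X[keep], y[keep]

# Example: 80 non-DER windows (label 0) vs. 20 DER windows (label 1).
X = np.random.default_rng(0).normal(size=(100, 16))
y = np.array([0] * 80 + [1] * 20)
X_bal, y_bal = undersample_to_balance(X, y, rng=0)
print(np.bincount(y_bal))  # -> [20 20]
```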
“…During training, 80% of the windows created from the input matrix X, together with the corresponding elements of the output vector y, are used to build a model f(x). The model f(x) is then validated on the remaining 20% of unseen data from the input matrix, and its predictions are compared with the expected values in the vector y at each timestep t. To provide a comprehensive evaluation of the proposed NILM method, 5-fold cross-validation is used to reduce the possibility of overfitting [34].…”
Section: F. NILM Model Assessment
confidence: 99%
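The evaluation protocol this statement describes could look like the following sketch, assuming a scikit-learn regressor on synthetic window data (everything here is illustrative; the cited NILM model is not reproduced):

```python
# Sketch of the 80/20 hold-out split plus 5-fold cross-validation
# evaluation described above (synthetic data and model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))             # windows built from the input matrix
y = X[:, 0] * 2.0 + rng.normal(size=500)   # expected value per timestep

# 80% of the windows train the model f(x); 20% are held out for validation.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("hold-out R^2:", model.score(X_val, y_val))

# 5-fold cross-validation gives a more comprehensive estimate and guards
# against an unlucky single split.
scores = cross_val_score(RandomForestRegressor(random_state=0), X, y, cv=5)
print("5-fold R^2 scores:", scores)
```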
“…The cross-validation mechanism extends the training procedure by one more step [8]: it divides the original sample set into six parts.…”
Section: Preprocess the Data
confidence: 99%
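For completeness, dividing a sample set into six parts and rotating the held-out part can be sketched with scikit-learn's KFold (a generic 6-fold illustration, not the cited paper's pipeline):

```python
# Sketch: split a sample set into six parts, each fold holding out a
# different part (generic illustration).
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(60).reshape(30, 2)  # 30 toy samples
for fold, (train_idx, held_out_idx) in enumerate(
        KFold(n_splits=6, shuffle=True, random_state=0).split(X)):
    print(f"fold {fold}: train on {len(train_idx)} samples, "
          f"hold out {len(held_out_idx)}")
```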