2018 25th IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2018.8451560
Image Classification Using Convolutional Neural Networks and Kernel Extreme Learning Machines

Cited by 18 publications (13 citation statements, all classified as mentioning). References 14 publications.
“…Since the ResNet50 [28] is the most widely used CNN architecture for food recognition problems, there are many results available for comparison. So, the performance of the proposed two-scale CNN is compared with the previous methods [31]-[34] that are based on the ResNet50. ResNet-50 is a convolutional neural network with 50 layers, and it has a fixed input size of 224 × 224.…”
Section: Methods (mentioning)
Confidence: 99%
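For context on the fixed input size the excerpt mentions, the sketch below loads a pretrained ResNet-50 with the standard 224 × 224 ImageNet preprocessing and strips the classifier to expose 2048-d features. This is a minimal sketch assuming PyTorch/torchvision; the two-scale CNN and any training details from the citing paper are not reproduced here.

```python
import torch
from torchvision import models, transforms

# Standard ImageNet preprocessing: ResNet-50 takes fixed 224 x 224 RGB inputs.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# 50-layer residual network pretrained on ImageNet.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()  # drop the 1000-way head; output 2048-d features
model.eval()

# Usage (hypothetical file name):
# img = preprocess(Image.open("dish.jpg")).unsqueeze(0)
# with torch.no_grad():
#     feats = model(img)  # shape (1, 2048)
```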
“…And it makes the model assign full probability to the ground-truth label for each training sample, which potentially leads to overfitting. Therefore, they use other machine learning models (SVM [17], ELM [20], etc.) to replace it.…”
Section: Preclassification (mentioning)
Confidence: 99%
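The replacement this excerpt describes matches the cited paper's pairing of CNN features with a kernel extreme learning machine in place of the softmax head. Below is a minimal sketch of a kernel ELM classifier over pre-extracted features; the RBF kernel and the values of C and gamma are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

class KernelELM:
    """Kernel ELM: one regularized linear solve instead of softmax training."""

    def __init__(self, C=100.0, gamma=1e-3):  # illustrative hyperparameters
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X_train = X
        T = np.eye(int(y.max()) + 1)[y]          # one-hot target matrix
        K = rbf_kernel(X, X, gamma=self.gamma)   # n x n kernel matrix
        # Closed-form output weights: beta = (I/C + K)^-1 T
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, X):
        K = rbf_kernel(X, self.X_train, gamma=self.gamma)
        return np.argmax(K @ self.beta, axis=1)
```

The output weights come from a single ridge-regularized solve rather than iterative gradient descent toward hard one-hot probabilities, which is the motivation the excerpt gives for swapping out the softmax classifier.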
“…Each category has a minimum of 80 images. Following [20], we utilize 60 randomly selected images per class as the training dataset and the rest as the testing dataset. We resize all images to 256 × 256 pixels.…”
Section: Incomplete Label Rectified Label (mentioning)
Confidence: 99%
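The split described here (60 random images per class for training, the remainder for testing) can be sketched as a small helper; `samples` as a list of (path, label) pairs and the function name are hypothetical, not from the cited paper.

```python
import random
from collections import defaultdict

def per_class_split(samples, n_train=60, seed=0):
    """Split (path, label) pairs: n_train random images per class
    go to training, the rest to testing."""
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append((path, label))
    rng = random.Random(seed)
    train, test = [], []
    for items in by_class.values():
        rng.shuffle(items)            # random per-class selection
        train.extend(items[:n_train])
        test.extend(items[n_train:])
    return train, test
```

Resizing to 256 × 256 would then happen at load time, e.g. with PIL's Image.resize((256, 256)).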
“…Accuracy (%) on Food-101, Stanford Dogs, and MIT Indoor 67:

Food-101:
(Bossard, Guillaumin, and Van Gool 2014) | 50.76
(Bossard, Guillaumin, and Van Gool 2014) | 56.40
(Meyers et al 2015) | 79.00
(Li et al 2018) | 82.60
(Wei et al 2018) | 85.70
(Guo et al 2018) | 87.30
(Hassannejad et al 2016) | 88.28

Stanford Dogs:
(Huang et al 2017) | 78.30
(Wei et al 2017) | 78.86
(Chen and Zhang 2016) | 79.50
(Zhang et al 2016) | 80.43
(Dubey et al 2018) | 83.75
(Niu, Veeraraghavan, and Sabharwal 2018) | 85.16
(Krause et al 2016) | 85.90

MIT Indoor 67:
(Milad and Subhasis 2016) | 72.20
(Dixit et al 2015) | 72.86
(Lin, RoyChowdhury, and Maji 2018) | 79.00
(Zhou et al 2018) | 79.76
(Yoo et al 2015) | 80.78
(Herranz, Jiang, and Li 2016) | 80.97
(Guo et al 2017) | …

…as D_clean+ft), and previous works: Bottom-up, Pseudo-label (Lee 2013), Weakly (Joulin et al 2016), Boosting, PGM (Xiao et al 2015), WSL (Chen and Gupta 2015), Harnessing (Vo et al 2017), Goldfince (Krause et al 2016), which also employ and process web data for training CNN models. Different from the above methods, which focus on data pre-processing, we optimize the model to learn from the web and standard data by reducing the influence of the dataset gap.…”
Section: Food-101 (mentioning)
Confidence: 99%