Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management 2016
DOI: 10.1145/2986035.2986039
Food/Non-food Image Classification and Food Categorization using Pre-Trained GoogLeNet Model

Abstract: The recent past has seen many developments in the field of image-based dietary assessment. Food image classification and recognition are crucial steps for dietary assessment. In the last couple of years, advances in deep learning and convolutional neural networks have proved to be a boon for image classification and recognition tasks, particularly for food recognition because of the wide variety of food items. In this paper, we report experiments on food/non-food classification and food recognition usin…

Cited by 169 publications (103 citation statements). References 33 publications.
“…An improvement of about 4% in overall accuracy is achieved using a CNN-based method [14]. Following this, numerous researchers have proposed CNN-based models either for feature extraction [2], [3] or for the whole recognition process [1], [4]. The best results on public datasets with more than 15,000 images [1], [2] have been reported in [3], through the combination of the GoogLeNet CNN for feature extraction, PCA for dimensionality reduction, and an SVM for classification.…”
Section: Related Work
Mentioning confidence: 99%
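As a rough sketch of the kind of pipeline described above (pre-trained GoogLeNet features, PCA for dimensionality reduction, an SVM for classification), the following snippet uses torchvision and scikit-learn; the dataset paths, the number of PCA components, and the SVM settings are illustrative assumptions rather than the cited authors' exact configuration.

```python
# Sketch: GoogLeNet as a fixed feature extractor, PCA for dimensionality
# reduction, and a linear SVM as the classifier. Assumes an image-folder
# dataset (one sub-folder per class); all hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained GoogLeNet; drop the 1000-way ImageNet head to expose 1024-d features.
net = models.googlenet(weights="IMAGENET1K_V1")
net.fc = nn.Identity()
net.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def extract_features(image_dir):
    """Run every image in image_dir through GoogLeNet and collect 1024-d features."""
    data = datasets.ImageFolder(image_dir, transform=preprocess)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=False)
    feats, labels = [], []
    with torch.no_grad():
        for images, targets in loader:
            feats.append(net(images.to(device)).cpu())
            labels.append(targets)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

X_train, y_train = extract_features("food_dataset/train")  # hypothetical path
X_test, y_test = extract_features("food_dataset/test")     # hypothetical path

# PCA reduces the 1024-d features; a linear SVM is trained on the reduced space.
clf = make_pipeline(PCA(n_components=128), LinearSVC(C=1.0, max_iter=10000))
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```

The backbone stays frozen here, so only the PCA projection and the SVM are fit on the food data, which keeps training cheap even on modest hardware.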
“…The latter is quite relevant because it removes the need for manual selection of the chosen dishes, which speeds up the service offered by these restaurants. From the computer vision side, several approaches have been proposed to tackle the problem, most of them using Convolutional Neural Networks (CNNs) [1], [2], [3], [4]. Several of the published works develop methods for food recognition, that is, recognizing the dish depicted in a picture in which a single plate is shown.…”
Section: Introduction
Mentioning confidence: 99%
“…The authors have been, and continue to be, highly involved in several multimedia-related international events in the field of the workshop: Measuring Behaviour 2016, PervasiveHealth 2014, the SenseCam Symposium 2010, and Beyond Quantified Self at CHI 2014, among others. The organizers work in different fields of multimedia in health, such as tracking of nutrition [1], digital interventions for personal health [2], activity recognition from smartphones [4], and lifelogging [3] using statistical methods, from multimodal heterogeneous data [4] and interpretation for personal health [6].…”
Section: Organizers
Mentioning confidence: 99%
“…Table 5 shows an overview of the current performance comparison on benchmark datasets:

Reference | Features / Model | Extra input | Task
[Anthimopoulos et al 2014] | SIFT, Color | - | Food recognition
[Oliveira et al 2014] | Color, Texture | - | Mobile food recognition
[Kawano and Yanai 2014c] | HoG, Color | - | Mobile food recognition
[Farinella et al 2015a] | SIFT, Texture, Color | - | Food recognition
[Martinel et al 2015] | Color, Shape, Texture | - | Food recognition
[Bettadapura et al 2015] | SIFT, Color | Location & Menu | Restaurant-specific food recognition
[Farinella et al 2015b] | SIFT, SPIN | - | Food recognition
(ref. missing) | SIFT, Color, HoG | - | Mobile food recognition
[Ravì et al 2015] | HoG, Texture, Color | - | Mobile food recognition
[Martinel et al 2016] | SIFT, Color, Shape, Texture | - | Food recognition
[He et al 2017] | Texture | - | Food recognition
(ref. missing) | SIFT, Color | - | Food recognition
[Kawano and Yanai 2014b] | HoG, Color, CNN | - | Food recognition
[Simonyan and Zisserman 2014] | VGG | - | Food recognition
[Kagaya et al 2014] | AlexNet | - | Food recognition
[Ao and Ling 2015] | GoogleNet | - | Food recognition
(ref. missing) | AlexNet | - | Food recognition
[Christodoulidis et al 2015] | CNN | - | Food recognition
(ref. missing) | VGG | Text | Recipe recognition
(ref. missing) | DeCAF | Location | Restaurant-specific food recognition
(ref. missing) | DeCAF | Location | Restaurant-specific food recognition
[Herruzo et al 2016] | GoogleNet | - | Food recognition
[Wang et al 2016] | CNN | Location | Restaurant-specific food recognition
[Singla et al 2016] | GoogleNet | - | Food recognition
[Ragusa et al 2016] | AlexNet, VGG, NIN | - | Food recognition
(ref. missing) | GoogleNet | - | Food recognition
[Ciocca et al 2016] | AlexNet | - | Food recognition
[Liu et al 2016] | Inception | - | Food recognition
[Hassannejad et al 2016] | Inception | - | Food recognition
[Tanno et al 2016] | (truncated in source)
[Martinel et al 2018] | WISeR | - | Food recognition…”
Section: Mobile Food Recognition: The Possibility of Introducing Smar…
Mentioning confidence: 99%
“…Compared with hand-crafted features, an improvement is achieved via CNN-based deep networks [Kagaya et al 2014]. Numerous researchers have proposed CNN-based models either for feature extraction [Aguilar et al 2017a; Ragusa et al 2016] or for the whole recognition process [Kagaya and Aizawa 2015; Singla et al 2016]. For example, [Singla et al 2016] reported experiments on food/non-food classification using the GoogLeNet network.…”
Mentioning confidence: 99%
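To make the food/non-food setting concrete, here is a minimal transfer-learning sketch with a pre-trained GoogLeNet from torchvision, freezing the backbone and training only a new two-class head; the data paths, hyperparameters, and frozen-backbone choice are assumptions for illustration and are not claimed to reproduce the setup of [Singla et al 2016].

```python
# Sketch: fine-tune only the final layer of a pre-trained GoogLeNet for
# binary food/non-food classification. Paths and hyperparameters are
# illustrative; the backbone is kept frozen for simplicity.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.googlenet(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False                    # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 2)      # new 2-way food/non-food head
model.to(device)

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical layout: food_nonfood/train/{food,non_food}/*.jpg
train_set = datasets.ImageFolder("food_nonfood/train", transform=train_tf)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)

model.train()
for epoch in range(5):                             # epoch count is a placeholder
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch}: loss {running_loss / len(train_loader):.4f}")
```

Unfreezing deeper layers and lowering the learning rate is a common variation when more labeled food images are available.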