2019
DOI: 10.1007/978-3-030-11024-6_45
Where and What Am I Eating? Image-Based Food Menu Recognition

Cited by 6 publications (4 citation statements) · References 22 publications
“…Besides that, in the research conducted by Bolaños, Valdivia, & Radeva (2018), they propose to explore the problem of image-based food menu recognition. The problem is, given an image, to determine the specific menu item of the restaurant where the image was taken; matching the image to its menu item makes it easier to retrieve the exact nutritional information of the food or any other data stored by the restaurant owners.…”
Section: Existing Research
confidence: 99%
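To make the matching formulation concrete, the following is a minimal sketch of the retrieval step described above: a dish photo embedding is compared against embeddings of a restaurant's menu items, and the closest item is returned so its stored data can be looked up. The function name, toy data, and embedding dimensions are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: given an embedding of a dish photo and embeddings of a
# restaurant's menu items, report the closest item so its stored data
# (e.g. nutritional information) can be looked up.
import numpy as np

def match_menu_item(image_emb, menu_embs, menu_names):
    # Cosine similarity between the photo embedding and each menu-item embedding.
    sims = menu_embs @ image_emb / (
        np.linalg.norm(menu_embs, axis=1) * np.linalg.norm(image_emb)
    )
    return menu_names[int(np.argmax(sims))]

rng = np.random.default_rng(0)
image_emb = rng.standard_normal(256)       # stand-in for a CNN image feature
menu_embs = rng.standard_normal((3, 256))  # stand-in for menu-item text features
print(match_menu_item(image_emb, menu_embs, ["soup", "pasta", "salad"]))
```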
“…On the other hand, the downside is that it does not account for dishes with foreign names, which cannot be easily learned by their language model. The study concludes that it is possible to build a food menu recognition model that works for any restaurant, without the need for a separate model for each restaurant or restaurant pair (Bolaños, Valdivia, & Radeva, 2018).…”
Section: Existing Research
confidence: 99%
“…[Aguilar et al 2018] conducted automatic food tray analysis in canteens and restaurants through CNN-based food detection and segmentation for smart restaurants. [Bolaños et al 2018] combined CNNs and Recurrent Neural Networks (RNNs) to determine, given an image as input, the correct menu item of the corresponding restaurant. Table 3 and Table 4 provide an overview of these approaches with respect to visual features, additional information, and recognition type.…”
Section: Mobile Food Recognition: The Possibility of Introducing Smart…
confidence: 99%
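As a rough illustration of the CNN-plus-RNN combination this statement describes, here is a hypothetical PyTorch sketch in which a small CNN embeds the dish photo, an LSTM embeds each tokenized menu-item description, and the two are compared in a shared space. The architecture, layer sizes, and names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a joint CNN+RNN matching model: a CNN embeds the dish
# photo, an LSTM embeds each menu-item description, and the closest menu item
# in the shared embedding space is selected. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MenuMatcher(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=128, joint_dim=256):
        super().__init__()
        # Tiny CNN stand-in for the image encoder (a pretrained CNN in practice).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, joint_dim),
        )
        # RNN text encoder over tokenized menu-item names.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, joint_dim, batch_first=True)

    def forward(self, image, menu_tokens):
        # image: (3, H, W); menu_tokens: (n_items, seq_len) token ids.
        img_vec = self.cnn(image.unsqueeze(0))           # (1, joint_dim)
        _, (h_n, _) = self.rnn(self.embed(menu_tokens))  # h_n: (1, n_items, joint_dim)
        text_vecs = h_n.squeeze(0)                       # (n_items, joint_dim)
        sims = F.cosine_similarity(img_vec, text_vecs)   # (n_items,)
        return sims.argmax()

model = MenuMatcher()
image = torch.randn(3, 224, 224)
menu_tokens = torch.randint(0, 5000, (12, 8))  # 12 menu items, 8 tokens each
print(model(image, menu_tokens))               # index of the best-matching item
```

In a trained system the two encoders would be learned jointly, e.g. with a ranking or classification loss over the restaurant's menu, so that a dish photo scores highest against its own menu entry.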
“…The goal of egocentric vision is to analyze the visual information provided by wearable cameras, which can acquire images from a first-person point of view. The analysis of these images provides information about the behavior of the user, useful for several complementary topics such as social interactions (Aghaei et al, 2018), scene understanding (Singh et al, 2016), time-space-based localization (Yao et al, 2018), action (Fathi et al, 2011; Possas et al, 2018) or activity recognition (Iwashita et al, 2014; Cartas et al, 2017), and nutritional habits analysis (Bolaños et al, 2018b), among others. This enables us to understand the whole story and behavior of the users behind the pictures (i.e.…”
Section: Captioning Visual Content
confidence: 99%