2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.180
Diabetes60 — Inferring Bread Units From Food Images Using Fully Convolutional Neural Networks

Cited by 23 publications (19 citation statements); references 27 publications.
“…Evidently, reducing the number of images users must take would be ergonomically ideal, but it can make volume quantification inaccurate because of occlusions that are not visible in a single-view image [108][109][110][120]. To circumvent this issue, researchers have combined depth-sensing technology with deep learning, allowing 3D depth maps of food objects to be predicted from their visible surfaces [108][109][110][120]. This approach assumes that, given sufficient images and training, the system learns the context of the scene, so that camera viewing angles, the depth values of food objects, and occluded regions can be extrapolated from the images [108][109][110][120].…”
Section: Depth Mapping
confidence: 99%
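To make the approach described in this statement concrete, below is a minimal sketch of single-image depth prediction with a fully convolutional encoder-decoder in PyTorch. It is illustrative only: the layer sizes and the DepthFCN name are assumptions for this sketch, not the architecture of any cited paper.

```python
# Minimal sketch of single-image depth prediction with a fully
# convolutional encoder-decoder (illustrative only; not the exact
# architecture of any cited paper).
import torch
import torch.nn as nn

class DepthFCN(nn.Module):
    """Maps an RGB image (B, 3, H, W) to a dense depth map (B, 1, H, W)."""
    def __init__(self):
        super().__init__()
        # Encoder: downsample while widening channels to build scene context.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to input resolution, ending in one
        # depth channel per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Softplus keeps the predicted depths strictly positive.
        return nn.functional.softplus(self.decoder(self.encoder(x)))

model = DepthFCN()
rgb = torch.randn(1, 3, 224, 224)   # dummy single-view food image
depth = model(rgb)                  # (1, 1, 224, 224) dense depth map
```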
“…To circumvent this issue, researchers have combined depth-sensing technology with deep learning, allowing 3D depth maps of food objects to be predicted from their visible surfaces [108][109][110][120]. This approach assumes that, given sufficient images and training, the system learns the context of the scene, so that camera viewing angles, the depth values of food objects, and occluded regions can be extrapolated from the images [108][109][110][120]. Unlike previous applications of deep learning in food volume estimation, which relied on relative estimates, this opens an avenue for absolute volume calculations.…”
Section: Depth Mapping
confidence: 99%
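As a rough illustration of how a predicted depth map supports absolute (rather than relative) volume estimates, the sketch below back-projects pixels through an assumed pinhole camera model. The function name, the known table depth, and the focal lengths are hypothetical inputs for the sketch, not values from the cited works.

```python
# Sketch of turning a predicted depth map into an absolute volume
# estimate. Assumes a pinhole camera with known focal lengths
# (fx, fy, in pixels), a binary food segmentation mask, and a known
# camera-to-table distance; all names here are illustrative.
import numpy as np

def food_volume(depth, mask, table_depth, fx, fy):
    """Integrate per-pixel volume elements over the food region.

    depth:       (H, W) predicted depth in metres (camera to food surface)
    mask:        (H, W) boolean food segmentation
    table_depth: camera-to-table distance in metres
    """
    # Height of the food surface above the table at each pixel.
    height = np.clip(table_depth - depth, 0.0, None)
    # The physical footprint of one pixel grows with depth: at distance Z
    # a pixel subtends (Z / fx) x (Z / fy) metres under the pinhole model.
    pixel_area = (depth / fx) * (depth / fy)
    return float(np.sum(height[mask] * pixel_area[mask]))

# Toy example: a flat slab 2 cm tall, filling a 100 x 100-pixel region
# 48 cm from the camera, with the table plane at 50 cm.
H = W = 100
depth = np.full((H, W), 0.48)
mask = np.ones((H, W), dtype=bool)
vol = food_volume(depth, mask, table_depth=0.50, fx=500.0, fy=500.0)
print(f"estimated volume: {vol * 1e6:.0f} cm^3")  # ~184 cm^3 here
```

The design point is that a pixel's physical footprint scales with depth squared, which is exactly what makes an absolute estimate possible once the camera intrinsics are known.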
“…A first attempt was reported by Allegra et al. [26], who applied a SegNet-based [27] CNN architecture to single-image food depth prediction. More recently, Christ et al. [28] proposed a CNN architecture with skip connections for food-image depth prediction.…”
Section: Depth Prediction
confidence: 99%
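The skip-connection idea attributed here to Christ et al. [28] can be sketched as follows: decoder features are concatenated with same-resolution encoder features, so fine spatial detail bypasses the bottleneck. This is a generic U-Net-style illustration under assumed layer sizes, not the published Diabetes60 architecture.

```python
# Schematic of skip connections for depth prediction: each decoder stage
# is concatenated with the same-resolution encoder features. Generic
# sketch only, not the architecture published in [28].
import torch
import torch.nn as nn

class SkipDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(inplace=True))
        self.up1 = nn.ConvTranspose2d(64, 32, 4, 2, 1)
        # The second decoder stage takes 64 channels: 32 upsampled plus
        # 32 from the skip connection out of enc1.
        self.up2 = nn.ConvTranspose2d(64, 1, 4, 2, 1)

    def forward(self, x):
        e1 = self.enc1(x)                 # (B, 32, H/2, W/2)
        e2 = self.enc2(e1)                # (B, 64, H/4, W/4)
        d1 = torch.relu(self.up1(e2))     # (B, 32, H/2, W/2)
        d1 = torch.cat([d1, e1], dim=1)   # skip connection from enc1
        return torch.relu(self.up2(d1))   # (B, 1, H, W) depth map
```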