2015 IEEE International Symposium on Multimedia (ISM)
DOI: 10.1109/ism.2015.67
Single-View Food Portion Estimation Based on Geometric Models

Abstract: In this paper we present a food portion estimation technique, based on a single-view food image, for estimating the amount of energy (in kilocalories) consumed at a meal. Unlike previous methods we have developed, the new technique estimates food portions without manual tuning of parameters. Although single-view 3D scene reconstruction is in general an ill-posed problem, the use of geometric models such as the shape of a container can help to partially recover the 3D parameters of food items…
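The container-shape idea in the abstract can be illustrated with a minimal sketch: assuming a cylindrical container model and a pixel-to-centimetre scale already recovered from the scene, the food volume follows from two image measurements. The function and parameter names are illustrative, not the paper's actual geometric-model fitting.

```python
import math

def cylinder_volume_cm3(rim_diameter_px: float, food_height_px: float,
                        px_per_cm: float) -> float:
    """Estimate food volume assuming a cylindrical container model.

    rim_diameter_px and food_height_px are measured in the image;
    px_per_cm is assumed to come from a reference object of known
    physical size. This is a simplification for illustration only.
    """
    radius_cm = (rim_diameter_px / px_per_cm) / 2.0
    height_cm = food_height_px / px_per_cm
    return math.pi * radius_cm ** 2 * height_cm
```

For example, a rim measuring 300 px across with food 60 px high at 30 px/cm corresponds to a 5 cm radius and 2 cm height, about 157 cm³.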

Cited by 68 publications (72 citation statements); references 18 publications.
“…Ptomey et al (17) analysed whether digital images are a feasible method to improve estimation of energy and macronutrient intake from proxy-assisted 3-day dietary records among adolescents diagnosed with intellectual and developmental disabilities. Twenty adolescents aged between 11 and 18 years with mild (intelligence quotient 50–69) to moderate (intelligence quotient 35–49) intellectual and developmental disabilities were given a tablet (iPad 2, Apple, Cupertino, CA, USA) to capture images of all food items consumed over three consecutive days (two weekdays and one weekend day). The participants were instructed to take before and after images themselves and to place a 5 × 5 cm checkered fiduciary marker in the image.…”
Section: Summary of Image-Assisted Approaches
confidence: 99%
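The fixed-size checkered marker described above provides the image scale needed for portion estimation. A minimal sketch, assuming the marker's width in pixels has already been measured (in practice it would come from corner detection, which is out of scope here):

```python
def px_per_cm_from_marker(marker_width_px: float,
                          marker_size_cm: float = 5.0) -> float:
    """Derive the pixel-to-centimetre scale from a fiducial marker of
    known physical size (5 x 5 cm in the study described above).

    marker_width_px: the marker's measured width in the image, assumed
    to be obtained by a separate detection step (hypothetical input).
    """
    return marker_width_px / marker_size_cm
```

A marker spanning 150 px in the image then yields a scale of 30 px/cm, which downstream volume estimation can use.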
“…The portion size estimation relies on the estimated volume of the identified food item. The volume estimation is performed based on three-dimensional reconstruction of the food item from the image as described previously (36–39). Images captured over a 24-h day by adolescents (n 15) were used to assess the error of automated determination of food weights compared with the known weights (38).…”
Section: Automated Food Identification and Portion Size Estimation
confidence: 99%
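Comparing an estimated volume against known ground-truth weights, as in the evaluation described above, requires converting volume to weight via a density value. A minimal sketch (the density figure is an illustrative assumption, not taken from the cited studies):

```python
def estimated_weight_g(volume_cm3: float, density_g_per_cm3: float) -> float:
    """Convert an estimated food volume to a weight using an assumed
    per-food density (e.g. from a nutrient database; value illustrative)."""
    return volume_cm3 * density_g_per_cm3

def percent_error(estimated_g: float, known_g: float) -> float:
    """Absolute percentage error of the estimated weight against the
    known (weighed) ground-truth weight."""
    return 100.0 * abs(estimated_g - known_g) / known_g
```

For instance, an estimate of 156 g against a known weight of 150 g gives a 4% error.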
“…Each food item in the image is segmented, identified [49], and its volume is estimated [16]. From this information, the energy and nutrient intake can be determined using a food composition table [27].…”
Section: Overall System Architecture
confidence: 99%
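The volume-to-energy step via a food composition table can be sketched as follows; the table entries and values here are hypothetical placeholders, not data from the cited composition table [27]:

```python
# Hypothetical composition table: food label -> (density g/cm3, kcal per gram).
# Real systems would query a nutrient database instead of a literal dict.
FOOD_TABLE: dict[str, tuple[float, float]] = {
    "rice": (0.85, 1.30),
    "orange juice": (1.04, 0.45),
}

def energy_kcal(food: str, volume_cm3: float) -> float:
    """Look up density and energy density for a recognised food label,
    then convert the estimated volume to kilocalories."""
    density, kcal_per_g = FOOD_TABLE[food]
    return volume_cm3 * density * kcal_per_g
```

This mirrors the pipeline above: segmentation and identification supply the label, volume estimation supplies `volume_cm3`, and the table supplies the conversion factors.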
“…These results from the “before” image are sent back to the mFR (step 3), where the user reviews and confirms the food labels (step 4). In step 5, the server receives the information back from the user and uses these results for final image-analysis refinement and volume estimation [16]. Step 6 consists of structuring the data generated in the previous steps by forming object descriptions (e.g.…”
Section: Overall System Architecture
confidence: 99%