2018
DOI: 10.1007/978-3-319-94229-2_9

Multi-view Model Contour Matching Based Food Volume Estimation

Cited by 5 publications (3 citation statements)
References 1 publication
“…The dimensions (length, width, height) of the food item are calculated first, using the reference object, and then the volume is calculated. The references may include thumbs, hands or random objects [84]-[88].…”
Section: B) Multi-view Methods
confidence: 99%
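
As a rough illustration of the reference-object idea described in the statement above (not the method of any specific cited paper), the sketch below scales pixel measurements into centimetres using a reference object of known size and then approximates the food volume with a simple ellipsoid model; the function names and the ellipsoid assumption are hypothetical.

```python
# Hypothetical sketch: scale pixel dimensions with a reference object of known
# size, then approximate food volume with an ellipsoid model.
# The ellipsoid assumption and all names here are illustrative, not from the paper.
import math

def pixels_to_cm(length_px: float, ref_px: float, ref_cm: float) -> float:
    """Convert a pixel measurement to centimetres using a reference object."""
    return length_px * (ref_cm / ref_px)

def estimate_volume_cm3(length_px, width_px, height_px, ref_px, ref_cm):
    """Approximate food volume (cm^3) as an ellipsoid with the scaled dimensions."""
    l = pixels_to_cm(length_px, ref_px, ref_cm)
    w = pixels_to_cm(width_px, ref_px, ref_cm)
    h = pixels_to_cm(height_px, ref_px, ref_cm)
    # Ellipsoid volume: (4/3) * pi * (l/2) * (w/2) * (h/2)
    return (4.0 / 3.0) * math.pi * (l / 2) * (w / 2) * (h / 2)

if __name__ == "__main__":
    # Example: a coin-sized reference object (2.4 cm) measured as 60 px in the image.
    print(estimate_volume_cm3(300, 240, 120, ref_px=60, ref_cm=2.4))
```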
“…Multi-view reconstruction approaches [24], [25] largely rely on traditional computer vision techniques. [26] estimated the contour of the food from three different views and matched it with a predefined library to estimate the volume. However, their library used only nine types of food with over-simplified shapes.…”
Section: 𝐪𝐪 𝐑𝐑
confidence: 99%
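
For the contour-matching idea attributed to [26] above, a rough illustration could compare a simple scale-invariant radial descriptor of each view's contour against per-view template contours stored in a small model library. The descriptor, the library structure, and all names below are assumptions for illustration, not the cited implementation.

```python
# Hypothetical sketch: match multi-view food contours against a small shape library.
import numpy as np

def radial_descriptor(contour_xy: np.ndarray, bins: int = 32) -> np.ndarray:
    """Scale-invariant descriptor: mean radial distance from the centroid per angular bin."""
    rel = contour_xy - contour_xy.mean(axis=0)
    d = np.linalg.norm(rel, axis=1)
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    idx = np.digitize(angles, np.linspace(-np.pi, np.pi, bins + 1)) - 1
    idx = np.clip(idx, 0, bins - 1)
    profile = np.array([d[idx == b].mean() if np.any(idx == b) else 0.0
                        for b in range(bins)])
    # Normalize by mean radius so the descriptor is independent of contour scale.
    return profile / (d.mean() + 1e-9)

def match_views(view_contours, library):
    """Pick the library model whose template contours best match all views combined."""
    best_name, best_cost = None, np.inf
    for name, templates in library.items():  # templates: one contour per view
        cost = sum(np.linalg.norm(radial_descriptor(c) - radial_descriptor(t))
                   for c, t in zip(view_contours, templates))
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name, best_cost
```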
“…Even a more recent work [7], using two views, relies only on traditional computer vision techniques. [22] uses a simple contour-mapping method, but its performance may stem from an easy dataset. Although [11] uses deep learning, it is only for object detection, and a simple method is used to calculate volume.…”
Section: Related Work
confidence: 99%