2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP)
DOI: 10.1109/mmsp53017.2021.9733557
Towards Learning Food Portion From Monocular Images With Cross-Domain Feature Adaptation

Cited by 12 publications (18 citation statements)
References 17 publications
“…We used the same training method as described in [12], where the RGB image serves as the input to the generative model, which outputs a generated Energy Density Map.…”
Section: B. Energy Density Map
Confidence: 99%
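The statement above describes a pipeline in which a generative model turns an RGB food image into an Energy Density Map, from which total food energy can be read off. A minimal sketch of that final readout step, assuming the map has already been generated (here it is faked with random values) and assuming a hypothetical calibration constant `kcal_per_unit` mapping map units to kilocalories:

```python
import numpy as np

# Hedged sketch, not the authors' exact code: given an Energy Density Map
# (in [12] this is produced by a generative model from the RGB image; here
# we substitute random values), total food energy is obtained by summing
# the per-pixel energy density over the image.
rng = np.random.default_rng(0)
density_map = rng.random((256, 256)).astype(np.float32)  # stand-in for a generated map

kcal_per_unit = 0.01  # hypothetical calibration constant (map units -> kcal)
total_energy_kcal = float(density_map.sum() * kcal_per_unit)
print(round(total_energy_kcal, 1))
```

The per-pixel formulation is what lets a single 2D map carry portion information: regions of energy-dense food contribute more to the sum than equal-sized regions of low-density food.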
“…Both use the VGG-16 [34] network as their backbone. Previously, VGG-16 was used as the backbone to extract features from the Energy Density Map [12], and ResNet-50 was used as the RGB feature extractor [35]. Since extracting features from a single-channel depth map is easier than extracting features from an RGB image, VGG-16 is sufficient in this case.…”
Section: Food Energy Estimation
Confidence: 99%
“…Classification of food images is typically the first and most fundamental step in automated image-based food analysis [8,10]. Most existing works focus on designing methods to improve the accuracy of food classification using static food image datasets [16][17][18][19][20]23]. However, static datasets such as Food-101 [2] or VireoFood-172 [4] are limited to training fixed classifiers, which may not be suitable for real-life scenarios because each person has their unique food consumption patterns.…”
Section: Introduction
Confidence: 99%