Companion Publication of the 2020 International Conference on Multimodal Interaction
DOI: 10.1145/3395035.3425656
Eating Sound Dataset for 20 Food Types and Sound Classification Using Convolutional Neural Networks

Cited by 4 publications (3 citation statements)
References 14 publications (21 reference statements)
“…Audio features should not be neglected in investigating Computational Commensality, as eating is a highly multisensory experience (Spence, 2017). Fortunately, some data-sets are available in the literature focusing on audio features: the iHEARu-EAT database, for instance, features recordings from 30 subjects eating 6 different kinds of food (Hantke et al, 2016), whereas the Eating Sound data-set proposed by Ma et al (2020) includes audio from 20 different food types. Besides giving information on the kind of food that is consumed or on possible conversation topics, one could also exploit this modality to have a better picture of the commensal scenario.…”
Section: Related Work
confidence: 99%
“…They developed a simple fully connected deep neural network to classify eating sounds of 20 different types of food (Ma et al., 2020). The best model used MFCC features as input, achieving 90% accuracy.…”
Section: Introduction
confidence: 99%
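The citation statement above summarizes the classification setup reported for this dataset: MFCC features fed to a simple fully connected network over 20 food classes, reaching about 90% accuracy. The code below is a minimal sketch of that kind of pipeline, not the authors' released code; the file handling, MFCC parameters, and layer sizes are illustrative assumptions.

# Minimal sketch (assumed, not the authors' code): mean-pooled MFCC features
# from eating-sound clips classified with a small fully connected network.
import numpy as np
import librosa
import tensorflow as tf

N_CLASSES = 20   # 20 food types, per the dataset title
N_MFCC = 40      # assumed number of MFCC coefficients

def clip_to_mfcc(path, sr=22050, duration=1.0):
    """Load one audio clip and return a fixed-size MFCC vector (mean over frames)."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)  # shape (N_MFCC, frames)
    return mfcc.mean(axis=1)                                # shape (N_MFCC,)

def build_model():
    """Simple fully connected classifier over pooled MFCC features."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_MFCC,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

# Usage (hypothetical paths and labels): build X, y by mapping clip_to_mfcc over the
# clips, then train with sparse categorical cross-entropy.
# model = build_model()
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(np.stack(X), np.array(y), epochs=30, validation_split=0.1)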