Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services 2014
DOI: 10.1145/2594368.2594386

BodyBeat

Abstract: In this paper, we propose BodyBeat, a novel mobile sensing system for capturing and recognizing a diverse range of non-speech body sounds in real-life scenarios. Non-speech body sounds, such as sounds of food intake, breath, laughter, and cough, contain invaluable information about our dietary behavior, respiratory physiology, and affect. The BodyBeat mobile sensing system consists of a custom-built piezoelectric microphone and a distributed computational framework that utilizes an ARM microcontroller and an An…

Cited by 137 publications (15 citation statements). References 21 publications.
“…Eating and drinking may also produce idiosyncratic sounds through chewing and swallowing. A microphone attached at the neck can classify sounds produced by eating and drinking with reasonable accuracy (Kalantarian et al 2015, Rahman et al 2014, Yatani & Truong 2012).…”
Section: Review of Personal Sensing Research (mentioning)
confidence: 99%
“…In [10], Rahman et al present BodyBeat: a robust system for detecting human sounds. A similar work is presented by Yatani et al in [11].…”
Section: Related Work (mentioning)
confidence: 99%
“…First, we filter the audio stream using an exponentially weighted moving average filter and scale the resulting data to the unit norm (L2 normalization). To remove the background noise, we perform loudness normalization, which eliminates bias due to the variation of perceived loudness across different sound frames in each audio window [27]. These procedures remove most of the background noises.…”
Section: A-vowel Detection Model (mentioning)
confidence: 99%
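
As a rough illustration of the preprocessing chain described in the excerpt above, the Python sketch below smooths an audio window with an exponentially weighted moving average, scales it to unit L2 norm, and then divides each frame by its RMS as a stand-in for loudness normalization. The function name preprocess_window, the parameter values (alpha, frame_len), and the RMS-based reading of "loudness normalization" are assumptions made for illustration, not the cited authors' implementation.

import numpy as np

def preprocess_window(window, alpha=0.1, frame_len=256, eps=1e-8):
    # Exponentially weighted moving average filter over the raw samples.
    window = np.asarray(window, dtype=float)
    smoothed = np.empty_like(window)
    acc = window[0]
    for i, x in enumerate(window):
        acc = alpha * x + (1.0 - alpha) * acc
        smoothed[i] = acc

    # Scale the whole window to unit L2 norm.
    smoothed = smoothed / (np.linalg.norm(smoothed) + eps)

    # Per-frame loudness normalization (assumed here to be RMS-based):
    # divide each frame by its RMS so loud and quiet frames are comparable.
    n_frames = len(smoothed) // frame_len
    frames = smoothed[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1, keepdims=True)) + eps
    return frames / rms

# Example with synthetic data: a 1-second window sampled at 8 kHz
# yields 31 normalized frames of 256 samples each.
audio = np.random.randn(8000)
frames = preprocess_window(audio)  # shape: (31, 256)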