The 6th Biomedical Engineering International Conference (BMEiCON 2013)
DOI: 10.1109/bmeicon.2013.6687639
The optimal electromyography feature for oral muscle movements

Cited by 4 publications (4 citation statements). References 9 publications.
“…3. It is superior to previous results from the unimodal fusion, such as accuracy at 91% and 94% for classification of five oral activities from six sEMG channels [14] and classification of nine Thai syllables from five sEMG channels [15], respectively. In [13], the multimodal fusion between one acoustic signal and five sEMG channels is used to classify ten words based on phonemes.…”
Section: Discussion (contrasting)
confidence: 56%
“…Features used in recognition of sEMG signals from the facial muscles can be determined based on their amplitude values, frequency contents, and statistical values. The popular amplitude-based features include root-mean-squared value [11]-[14], mean absolute value, zero crossing [11], [14], [15], waveform length [14], [15], and slope sign change [14]. While frequency-based features commonly extracted are Fast Fourier transform coefficients [11] and mean frequency [15], statistical-based features are kurtosis [11], [15] and skewness [15].…”
Section: B. Feature Extraction (mentioning)
confidence: 99%
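The citing passage above enumerates the standard sEMG feature families: amplitude-based (RMS, mean absolute value, zero crossing, waveform length, slope sign change), frequency-based (mean frequency), and statistics-based (kurtosis, skewness). A minimal sketch of how such features are typically computed for one analysis window, assuming NumPy/SciPy; the function name `semg_features` and the omission of amplitude thresholds for zero crossing and slope sign change are simplifications for illustration, not taken from the cited paper:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def semg_features(x, fs):
    """Compute common sEMG features for one channel window x sampled at fs Hz."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    # Power spectrum for the frequency-domain feature (mean frequency).
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return {
        "rms": float(np.sqrt(np.mean(x ** 2))),            # root-mean-squared value
        "mav": float(np.mean(np.abs(x))),                  # mean absolute value
        "zc": int(np.sum(x[:-1] * x[1:] < 0)),             # zero crossings (sign changes of x)
        "wl": float(np.sum(np.abs(d))),                    # waveform length
        "ssc": int(np.sum(d[:-1] * d[1:] < 0)),            # slope sign changes
        "mnf": float(np.sum(freqs * spec) / np.sum(spec)), # mean frequency
        "kurtosis": float(kurtosis(x)),                    # excess kurtosis
        "skewness": float(skew(x)),
    }
```

In practice, zero-crossing and slope-sign-change counts usually include a small amplitude threshold to suppress noise-induced crossings; it is omitted here for brevity.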
“…In fact, the obtained results show that this system can recognize laughter from other different actions with a global correct discrimination rate of at least 64%. These results, especially the ones obtained with the binary classifiers, compared with previous works in the field using acoustic features [14]-[22] or other sEMG-based systems [36], [44]-[49], are not particularly impressive. However, even if this EMG-based system is more invasive compared to video or audio systems, it is less invasive compared to previously developed sEMG systems, and it could be used either as a standalone wearable system or as an auxiliary system to detect and classify laughter together with audio or video systems.…”
Section: Discussion (mentioning)
confidence: 54%