2011
DOI: 10.2147/ijn.s26619

Human facial neural activities and gesture recognition for machine-interfacing applications

Abstract: The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human–machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is sugg…


Citations: cited by 14 publications (12 citation statements).
References: 30 publications.
“…It must be mentioned that comparing the overall performance of previous works with the results of this paper was not fair, since the number of classes, the participants, the signal recording protocol, and the considered facial gestures were not the same. When comparing with [23], in which a similar setup was considered, it should be noticed that despite the lower accuracy (about 3%) achieved by VEBFNN, this classifier was considerably faster than FCM.…”
Section: Results (mentioning, confidence: 99%)
“…Since there is not much work reported on facial EMG analysis, this paper considers the same setup used in [23] to further investigate the impact of different facial EMG features on the classification of facial gestures. Therefore, the characteristics of EMGs from ten facial gestures were explored by extracting ten different time-domain features.…”
Section: Introduction (mentioning, confidence: 99%)
“…Their general aim has been to categorise phonemes, words, articulatory features, or gestures by facial and tongue EMG signals [7–14]. Honda et al [15] and Lucero & Munhall [16] have both published on predicting lip shapes.…”
Section: Introduction (mentioning, confidence: 99%)
“…For each analysis window of each movement pattern (Mi, i = 1, 2, …, 5, listed in Table 1 but excluding “no movement”), the root mean square (RMS) amplitude (24, 25) and the 4th-order autoregressive (AR) model coefficients (18) of each channel were calculated as features. The AR model coefficient that is constant was removed, so that there were F = 5 features from each channel in each analysis window.…”
Section: Methods (mentioning, confidence: 99%)
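Below is a minimal sketch of the feature computation this statement describes (RMS amplitude plus 4th-order AR coefficients with the constant term dropped, giving F = 5 features per channel per analysis window), assuming a single-channel window stored as a 1-D NumPy array. The function name, the least-squares AR fit, and the placeholder data are illustrative assumptions, not code from the cited paper.

```python
import numpy as np

def window_features(x, order=4):
    """RMS amplitude plus AR(order) coefficients for one EMG analysis window.

    The constant (intercept) term of the AR fit is dropped, leaving
    1 + order = 5 features per channel per window for order = 4.
    """
    x = np.asarray(x, dtype=float)

    # Root mean square amplitude of the window
    rms = np.sqrt(np.mean(x ** 2))

    # Least-squares AR(order) fit: x[t] ~ c + a1*x[t-1] + ... + a_order*x[t-order]
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    X = np.column_stack([np.ones(len(X)), X])   # constant column first
    y = x[order:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    ar_coefs = coefs[1:]                        # drop the constant term

    return np.concatenate([[rms], ar_coefs])    # 5 features for order = 4

# Hypothetical usage: 5 features per channel for one 200-sample, 4-channel window
window = np.random.randn(200, 4)                # placeholder data (samples x channels)
features = np.hstack([window_features(window[:, ch]) for ch in range(window.shape[1])])
```

Concatenating the per-channel feature vectors, as in the usage line above, yields the combined feature vector for one analysis window of one movement pattern.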