2021
DOI: 10.1145/3448105

Acoustic-based Upper Facial Action Recognition for Smart Eyewear

Abstract: Smart eyewear (e.g., AR glasses) is considered the next big breakthrough for wearable devices. Interaction with state-of-the-art smart eyewear mostly relies on a touchpad, which is obtrusive and not user-friendly. In this work, we propose a novel acoustic-based upper facial action (UFA) recognition system that serves as a hands-free interaction mechanism for smart eyewear. The proposed system is a glass-mounted acoustic sensing system with several pairs of commercial speakers and microphones to sense …
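The abstract only sketches the hardware (speaker/microphone pairs mounted on the glasses frame) and does not specify the probe signal or features. For intuition, the snippet below is a minimal, hypothetical sketch of a generic active acoustic sensing loop: play a near-ultrasonic chirp from the speaker, capture the return at the microphone, and turn each frame into log-spectrogram features for a downstream classifier. The sample rate, chirp band, frame length, and function names are illustrative assumptions, not values from the paper.

# Minimal sketch of an active acoustic sensing loop (illustrative only; the
# paper's actual signal design and features are not given in this excerpt).
# Sample rate, chirp band, and frame length below are assumed values.
import numpy as np
from scipy.signal import chirp, spectrogram

FS = 48_000          # speaker/microphone sample rate in Hz (assumed)
FRAME = 0.1          # analysis frame length in seconds (assumed)

def make_probe(duration=FRAME, f0=18_000, f1=22_000, fs=FS):
    """Near-ultrasonic linear chirp to be played by the eyewear speaker."""
    t = np.linspace(0, duration, int(fs * duration), endpoint=False)
    return chirp(t, f0=f0, t1=duration, f1=f1)

def frame_features(mic_frame, fs=FS):
    """Log-power spectrogram of one received frame, for a downstream classifier."""
    _, _, sxx = spectrogram(mic_frame, fs=fs, nperseg=512, noverlap=256)
    return 10 * np.log10(sxx + 1e-12)

# Usage example with synthetic data standing in for a real microphone capture:
probe = make_probe()
simulated_echo = probe + 0.01 * np.random.randn(probe.size)
features = frame_features(simulated_echo)
print(features.shape)   # (frequency bins, time steps) feature map

In a real system the features would be fed to a classifier trained on labeled upper facial actions; the choice of a chirp probe and spectrogram features here is only one common design among several used in acoustic sensing work.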

Cited by 16 publications (10 citation statements)
References 43 publications
“…Prior work [6,34,72,82] has demonstrated that partial skin and muscle deformations behind and inside the ears are highly informative for reconstructing full facial expressions when captured by different kinds of sensors. Xie et al. [74] showed that the skin and muscle deformations around the eyes and the cheeks contain information that can be extracted to recognize upper facial gestures. The sensing hypothesis of EyeEcho is that these deformations around the eyes and cheeks are highly informative about detailed facial movements on both the lower and upper face, including the eyes, eyebrows, cheeks, and mouth…”
Section: Principle of Operation
confidence: 99%
“…To explore these research questions, we developed EyeEcho using active acoustic sensing to capture the skin deformations on glasses, which we will detail later. We chose acoustic sensing because its sensors are small, lightweight, and low power, and they have been successfully applied to various tasks in tracking human activities, including health-related activity detection [45,65], novel interaction methods [3,66,77,78], silent speech recognition [61,80,81,83], authentication [11,18,23,40,67], discrete facial expression recognition [74], gaze tracking [33], finger tracking [46,60], hand gesture recognition [31,79], body pose estimation [41], and motion tracking [35,84]…”
Section: Principle of Operation
confidence: 99%
“…Smart glasses, coupled with microphones and speakers, have also been used for acoustic sensing. In [49], smart glasses are used to capture facial muscle movements and classify facial actions such as cheek and brow raises, winks, etc…”
Section: Related Work
confidence: 99%
“…The wearable community has explored facial expression monitoring with different sensing modalities embedded in accessories. The various sensing methods include cameras [16,35], IMUs [31,20], light [22], capacitive [39], piezoelectric [40], electromyography (EMG) [41], mechanomyography [28,2], audio-based solutions [2,33,42], and many others. Table 1 shows a comparison of the state-of-the-art approaches…”
Section: Facial Monitoring with Wearables
confidence: 99%