26th International Conference on Intelligent User Interfaces 2021
DOI: 10.1145/3397481.3450645

TeethTap: Recognizing Discrete Teeth Gestures Using Motion and Acoustic Sensing on an Earpiece

Abstract: Teeth gestures have become an alternative input modality for different situations and accessibility purposes. In this paper, we present TeethTap, a novel eyes-free and hands-free input technique, which can recognize up to 13 discrete teeth tapping gestures. TeethTap adopts a wearable 3D printed earpiece with an IMU sensor and a contact microphone behind both ears, which work in tandem to detect jaw movement and sound data, respectively. TeethTap uses a support vector machine to classify gestures from noise by fusi…
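The abstract outlines a sensor-fusion pipeline: motion features from the IMU and acoustic features from the contact microphone are combined and fed to a support vector machine that separates gestures from noise. Below is a minimal, hypothetical sketch of such a pipeline in Python with scikit-learn; the window shapes, hand-picked features, and SVM parameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumed, not the authors' code): fuse IMU and contact-microphone
# features, then classify teeth-tap gestures vs. noise with an SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(imu_window: np.ndarray, audio_window: np.ndarray) -> np.ndarray:
    # imu_window: (n_samples, 6) accelerometer + gyroscope; audio_window: (n_samples,) mic samples.
    imu_feats = np.concatenate([imu_window.mean(axis=0),
                                imu_window.std(axis=0),
                                np.abs(imu_window).max(axis=0)])
    audio_feats = np.array([audio_window.std(),
                            np.abs(audio_window).max(),
                            np.mean(audio_window ** 2)])  # signal energy
    return np.concatenate([imu_feats, audio_feats])

# X: one fused feature vector per labeled window; y: 0 = noise, 1..13 = gesture id.
# Both would come from segmented, labeled recordings of the two sensor streams.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
# clf.fit(X, y)
# label = clf.predict(extract_features(imu_win, audio_win).reshape(1, -1))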


Cited by 32 publications (9 citation statements)
References 33 publications

Citation statements (ordered by relevance):
“…For example, when interacting with a map view to find the best local dining option, a user may frequently pan around and zoom (in and out) to view different restaurants, and both the duration of stay on a particular restaurant and how many times it is viewed back and forth could be leveraged to approximate the user's interest and investment of effort. One way to address these concerns is to leverage a more diverse set of behavioral signals and potentially signal combinations, such as scrolling, mouse panning, zooming, eye tracking [35,36,89,90], and facial gestures tracking [70,117] to collect a more accurate picture of what users are seeing on screen. Another future direction that could be fruitful is to take a machine learning approach instead of the current rule-based approach for approximating content importance using behavioral signals.…”
Section: Future Work (mentioning)
confidence: 99%
“…4). One direct solution to make lips more visible to the DHH viewers is to have the speaker equip a wearable camera [31,40] or explore earable sensing systems [66] that track the lip movement. However, such an approach would be very cumbersome and costly to individual content creators.…”
Section: 5.1 (mentioning)
confidence: 99%
“…Eye gesture input is the most frequently used method for hands-free input such as eye gesture recognition [31][32][33] and gaze interaction (Orbits [6], GazeTap [15]). And mouth-related interface (Whoosh [38], TieLent [20]) such as tongue interface [11,24] and teeth interface (TeethTap [43],Bitey [2], EarSense [37]) also offers users a novel way for hands-free input. What's more, ear-based interaction (EarRumble [39]), waist gestures (HulaMove [48]) and foot gestures (FootUI [17], FEETICHE [26]) are also explored by researchers.…”
Section: Hands-free Gesture Input (mentioning)
confidence: 99%