Proceedings of the Symposium on Eye Tracking Research and Applications 2012
DOI: 10.1145/2168556.2168632
Typing with eye-gaze and tooth-clicks

Abstract: In eye-gaze-based human-computer interfaces, the most commonly used mechanism for generating activation commands (i.e., mouse clicks) is dwell time (DT). While DT can be relatively efficient and easy to use, it is also associated with the possibility of generating unintentional activation commands, an issue known as the Midas touch problem. To address this problem, we proposed to use a "tooth-clicker" (TC) device as a mechanism for generating activation commands independently of the activity of the ey…
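To illustrate the trade-off the abstract describes, the following is a minimal Python sketch, not taken from the paper, contrasting dwell-time activation, where any sufficiently long fixation fires a click, with activation by an external trigger such as a tooth click, where gaze only points and the click comes from a separate signal. All names and thresholds (GazeSample, DWELL_MS, FIXATION_RADIUS) are assumptions for illustration only.

```python
# Illustrative sketch only: dwell-time activation vs. an external click trigger.
# Thresholds and data types are assumed, not taken from the paper.
from dataclasses import dataclass

DWELL_MS = 500          # assumed dwell threshold in milliseconds
FIXATION_RADIUS = 40    # pixels within which gaze counts as "on the same target"

@dataclass
class GazeSample:
    x: float
    y: float
    t_ms: float

def dwell_click(samples, dwell_ms=DWELL_MS, radius=FIXATION_RADIUS):
    """Return the time of a dwell-based activation, or None.

    A click fires once gaze stays within `radius` of the fixation start
    for at least `dwell_ms`. Any fixation that long triggers a click,
    intended or not, which is the Midas touch problem.
    """
    if not samples:
        return None
    anchor = samples[0]
    for s in samples[1:]:
        if (s.x - anchor.x) ** 2 + (s.y - anchor.y) ** 2 > radius ** 2:
            anchor = s                      # gaze moved away: restart the dwell timer
        elif s.t_ms - anchor.t_ms >= dwell_ms:
            return s.t_ms                   # dwell threshold reached: activate
    return None

def external_click(click_events):
    """With an external trigger (e.g., a tooth click), gaze only points;
    activation happens only when the separate click signal arrives."""
    return click_events[0] if click_events else None
```

The dwell variant makes the failure mode visible: every fixation longer than the threshold activates, whereas the external-trigger variant never activates from gaze alone.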

Cited by 26 publications (11 citation statements)
References 18 publications
“…Growing computational power makes it possible to analyze spoken words and even video recordings online. So speech recognition [13,14] or eye tracking [15-18] have emerged as alternatives. Eye tracking for text entry amounts to moving a mouse pointer across an on-screen keyboard.…”
Section: Related Work
confidence: 99%
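To make the quoted observation concrete, here is a small illustrative sketch, not taken from any of the cited works, of resolving a gaze coordinate to a key on an on-screen keyboard laid out as a uniform grid; the layout constants (KEY_W, KEY_H, ORIGIN_X, ORIGIN_Y) and the key_at helper are hypothetical.

```python
# Minimal sketch (assumed layout, not from the cited work): map a gaze point
# to a key of a grid-aligned on-screen QWERTY keyboard.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_W, KEY_H = 80, 80            # assumed key size in pixels
ORIGIN_X, ORIGIN_Y = 100, 400    # assumed top-left corner of the keyboard

def key_at(gaze_x, gaze_y):
    """Return the key under the gaze point, or None if gaze is off the keyboard."""
    row = int((gaze_y - ORIGIN_Y) // KEY_H)
    col = int((gaze_x - ORIGIN_X) // KEY_W)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None

# Example: a gaze sample over the second key of the top row resolves to 'w'.
assert key_at(ORIGIN_X + 1.5 * KEY_W, ORIGIN_Y + 0.5 * KEY_H) == "w"
```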
“…Object acquisition refers to the movement of focus/cursor over the object to be selected and object selection means activation of a selection trigger when the focus/cursor comes over the desired object. Some of the techniques used for object acquisition are: eye gaze tracking [1], [2], face tracking [3], [4], facial feature tracking [5], scanning [6]-[9], and tongue movement [10]. Object selection can be performed by using key trigger [11], eye blinking [12], [13], dwell time trigger [14], [15], antisaccades, gaze gestures, on-off screen buttons, dashers, pEYEs [16], mouth opening click [17], tooth clicker [2], brows up clicker [8], EMG clicking [18], and clicking with smiling [19].…”
Section: Introduction
confidence: 99%
“…While interacting with a computer using an eye tracker, eye movements may be used to control the position of the pointer. Selections of targets on the screen can then be performed by blinking an eye (e.g., Ashtiani and MacKenzie [2010] and Tangsuksant et al. [2012]), pushing a physical button (e.g., MacKenzie and Zhang [2008]), or moving muscles that can still be controlled (e.g., Zhao et al. [2012]). However, these solutions for performing target selections do not suit all users and may cease to be viable options for many users because of their declining physical capabilities.…”
Section: Introduction
confidence: 99%