Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI 2020)
DOI: 10.1145/3313831.3376317

TAGSwipe: Touch Assisted Gaze Swipe for Text Entry

Abstract: The conventional dwell-based methods for text entry by gaze are typically slow and uncomfortable. A swipe-based method that maps the gaze path into words offers an alternative. However, it requires the user to explicitly indicate the beginning and end of a word, which is typically achieved by tedious gaze-only selection. This paper introduces TAGSwipe, a bi-modal method that combines the simplicity of touch with the speed of gaze for swiping through a word. The result is an efficient and comfortable dwell-free t…
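The abstract describes delimiting a word with a touch press and release while the gaze traces the word's letters on an on-screen keyboard, and then mapping that gaze path to a word. Purely as an illustration of how a swipe-style decoder can score candidates between the press and release events, here is a minimal sketch; the key layout, helper names, and nearest-ideal-path matching rule are assumptions, not the decoder published in the paper.

```python
import math

# Hypothetical normalised key-centre positions (partial QWERTY layout for brevity).
KEY_CENTRES = {
    't': (0.45, 0.1), 'i': (0.75, 0.1), 'o': (0.85, 0.1), 'h': (0.60, 0.3),
}

def resample(path, n=32):
    """Resample a polyline to n roughly equidistant points."""
    if len(path) == 1:
        return path * n
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1] or 1e-9
    out, step, target, i = [], total / (n - 1), 0.0, 0
    for _ in range(n):
        while i < len(dists) - 2 and dists[i + 1] < target:
            i += 1
        seg = (dists[i + 1] - dists[i]) or 1e-9
        t = (target - dists[i]) / seg
        (x0, y0), (x1, y1) = path[i], path[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        target += step
    return out

def ideal_path(word):
    """Straight-line path through the key centres of a word's letters."""
    return [KEY_CENTRES[c] for c in word]

def decode(gaze_path, lexicon):
    """Return the candidate word whose ideal path lies closest to the gaze path."""
    g = resample(gaze_path)
    def cost(word):
        w = resample(ideal_path(word))
        return sum(math.hypot(gx - wx, gy - wy)
                   for (gx, gy), (wx, wy) in zip(g, w)) / len(g)
    return min(lexicon, key=cost)

# A touch press/release pair would delimit gaze_path; here a synthetic path from 'h' to 'i'.
print(decode([(0.60, 0.3), (0.68, 0.2), (0.75, 0.1)], ['hi', 'ho', 'it']))
```

In this sketch the touch channel only marks the start and end of the gaze swipe; the word itself is recovered by comparing the resampled gaze path against each candidate's ideal path over the key centres.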

Cited by 31 publications (18 citation statements)
References 44 publications
“…EyeSwipe can achieve a typing speed of 11.7 wpm after 30 minutes of typing. Kumar et al. [37] presented “TAGSwipe,” a gaze- and touch-based system mapping the gaze path into words, which can achieve a typing speed of 15.45 wpm.…”
Section: B. Accessible Interactions
confidence: 99%
“…From all the related work discussed on gaze-assisted interactions, it can be observed that they use dwell-based activation, specific gaze gestures, or smooth-pursuit movement of the gaze to trigger dedicated actions. Applications like “TAGSwipe” [37], gaze gesture-based authentication [10], [11], the gaze gesture guiding system [31], and so on primarily used gaze gestures to express a user's intended action; however, the gestures used and the recognition method remain application-specific. This lack of a generic gesture recognition framework motivated us to explore gaze gesture design strategies, recognition algorithms, and the time and performance measures of various recognition algorithms.…”
Section: E. Entertainment
confidence: 99%
“…The naive solution to the Midas problem [2] is using dwell to indicate the intent of selection confirmation. Nevertheless, "gaze pointing + dwell confirmation" is slow and uncomfortable [5]. Sidenmark et al [19] proposed to utilize the distinction between gaze shift performed only by eyes and head-supported gaze shift, i.e., eye movement accompanied by head movement, to enable hover interaction, visual exploration around pre-selected targets, etc.…”
Section: Bi-modal Target Selection Involving Eye Gaze
confidence: 99%
“…To address the Midas problem, despite the naive solution dwell that is reported to be slow and uncomfortable [5], a secondary interaction modality for target selection confirmation (e.g., using two fingers to make an air-tap gesture, pressing a button on a device, etc.) is usually combined with eye gaze pointing.…”
Section: Introduction
confidence: 99%
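The two excerpts above contrast dwell-based confirmation with a secondary modality (for example a touch or button press) for confirming a gaze-pointed target. Below is a minimal sketch of the two policies; the sample type, function names, and timings are assumptions for illustration, not code from TAGSwipe or the cited papers.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GazeSample:
    t_ms: float              # timestamp in milliseconds
    target: Optional[str]    # key/target currently under the gaze point, if any

def dwell_select(samples: List[GazeSample], dwell_ms: float = 600) -> Optional[str]:
    """Select a target once gaze rests on it continuously for dwell_ms (Midas-touch risk)."""
    current, since = None, 0.0
    for s in samples:
        if s.target != current:
            current, since = s.target, s.t_ms
        elif current is not None and s.t_ms - since >= dwell_ms:
            return current
    return None

def touch_confirm_select(samples: List[GazeSample], touch_t_ms: float) -> Optional[str]:
    """Select whatever target the gaze is on when the touch event arrives."""
    selected = None
    for s in samples:
        if s.t_ms <= touch_t_ms:
            selected = s.target
    return selected

# Gaze hovers over 'H': dwell confirms only after 600 ms,
# whereas a touch at t = 350 ms confirms immediately.
trace = [GazeSample(t, 'H') for t in range(0, 800, 50)]
print(dwell_select(trace), touch_confirm_select(trace, touch_t_ms=350))
```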
“…In recent years, several commercial AR and VR HMDs have introduced integrated eye tracking functionality. Ahn et al [3] and Kumar et al [56] combined gaze and touch to input text and Rajanna et al [78] combined gaze and a button click. The additional touch modality is used to select a key, and speeding up eye-tracking for text entry, usually using dwell time over a key to confirm inputs.…”
Section: Gaze-Based Text Entry
confidence: 99%