CHI Conference on Human Factors in Computing Systems 2022
DOI: 10.1145/3491102.3517472

Select or Suggest? Reinforcement Learning-based Method for High-Accuracy Target Selection on Touchscreens

Abstract: Suggesting multiple target candidates based on touch input is a possible option for high-accuracy target selection on small touchscreen devices, but it can become overwhelming if suggestions are triggered too often. To address this, we propose SATS, a Suggestion-based Accurate Target Selection method, where target selection is formulated as a sequential decision problem. The objective is to maximize the utility: the negative time cost for the entire target selection procedure. The SATS decision process is dicta…
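As a rough illustration of the decision formulation described in the abstract (the action names and time costs below are made-up placeholders, not the paper's actual implementation), at each step the agent would pick the action whose utility, the negative expected time cost, is highest:

```python
def choose_action(expected_time_cost: dict[str, float]) -> str:
    """Pick the action maximizing utility = -(expected time cost).
    Action names and costs here are illustrative placeholders."""
    return max(expected_time_cost, key=lambda a: -expected_time_cost[a])

# With these made-up costs, selecting the top candidate directly
# beats showing a suggestion list:
choose_action({"select_top_candidate": 0.4, "show_suggestions": 1.2})
```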

Cited by 7 publications (3 citation statements)
References 51 publications
“…After taking an action, the agent immediately receives a reward from the environment, set as a negative value of the expected time cost for each action [30]:…”
Section: Creating a Reward Model
confidence: 99%
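Under this reward definition, the return of a whole selection episode is simply the negated sum of the per-action expected time costs, so maximizing return minimizes total selection time. A minimal sketch (the step costs are made-up numbers):

```python
def episode_return(action_time_costs: list[float]) -> float:
    """Cumulative reward of one selection episode: each action's
    immediate reward is its negative expected time cost."""
    return -sum(action_time_costs)

# A hypothetical episode: a 0.3 s suggestion step, then a 0.5 s confirmation.
episode_return([0.3, 0.5])  # -0.8
```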
“…Instead of adopting practical menu items such as "Personal Hotspot" in phone settings, the names of animals and fruits are easier for the participants to recognize and distinguish from other items. Following previous work in menu selection [15,47], smartphone app launching [50], and target selection [3,18,45,64], we generated the number of occurrences of each item from the Zipf distribution:…”
Section: Experiments Design
confidence: 99%
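One way to generate such Zipf-distributed occurrence counts, sketched with Python's standard library (the function name and parameters are my own, and the exponent s=1 is an assumption, not necessarily what the cited study used):

```python
import random
from collections import Counter

def zipf_occurrences(n_items: int, total: int, s: float = 1.0,
                     seed: int = 0) -> list[int]:
    """Draw `total` item selections where the probability of the item
    at rank k is proportional to 1 / k**s, and return per-item counts."""
    rng = random.Random(seed)
    weights = [1.0 / (k ** s) for k in range(1, n_items + 1)]
    picks = rng.choices(range(n_items), weights=weights, k=total)
    counts = Counter(picks)
    return [counts[i] for i in range(n_items)]

# e.g. 8 menu items over 200 trials: low-rank items occur far more often.
occ = zipf_occurrences(8, 200)
```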
“…Ref | Sensor | Feature | Modality | Leverage | Application
[40] | Eye Response ERICA | gaze gesture | - | E | interface control
[48] | external IR cam | dwell & gesture | - | E | target selection
[145] | Tobii X60 | dwell | - | I | attention analysis
[206] | modified prototype | gaze gesture | - | E | interface control
[193] | Tobii EyeX | dwell | touch | I | interface control
[224] | phone camera | gaze basic events | - | E | user authentication
[125] | Facelab 5 | gaze basic events | - | I | Website usability test
[150] | phone camera | dwell | - | I | intention inference
[120] | phone camera | gaze gesture | touch | E | user authentication
[261] | Tobii eyeX | gaze basic events | touch | I | gaze adaptive UI
[267] | phone camera | gaze gesture | - | E | gaze input
[121] | external RGB cam | gaze gesture | touch | E | user authentication
[227] | phone camera | gaze basic events | - | I | attention inference
[219] | Tobii 4C | dwell | touch | E | text editing aids
[202] | Tobii 4C | dwell | touch | E | text interface control
[167] | phone camera | dwell | voice | E | map navigation
[220] | phone camera | eye image | - | I | ocular exam
[239] | phone camera | dwell | touch | E | cross-device control
[58] | phone camera | dwell | eyelid | E | interface control
[112] | phone camera | gaze basic events | - | I | attention analysis
[254] | phone camera | gaze gesture | touch | E | gaze-assist input
[133] | phone camera | dwell | hand motion | E | interface control
[180] | Tobii X2 | gaze basic events | - | I | attention analysis
[153] | phone camera | dwell | - | E | target selection
[123] | Tobii 4C | dwell | voice | I | implicit note-taking
[122] | external RGB cam | gaze gesture | touch | E | user authentication
[274] | phone camera | dwell | voice | E | text correction
[10] | external RGB cam | gaze & face | - | I | user privacy
[266] | phone camera | eye image | - | I | holding posture detection
[275] | external RGB cam | gaze basic events | * | E | gaze command definition
[103] | Tobii X2 | gaze basic events | - | I | attention analysis …”
Section: Project Year
confidence: 99%