Abstract - While mid-air gestures offer new possibilities for interacting with or around devices, some situations, such as interacting with applications, playing games or navigating, may require visual attention to remain focused on a main task. For these contexts, ultrasonic haptic feedback can provide 3D spatial haptic cues that do not demand visual attention. In this paper, we present an initial study of active exploration of ultrasonic haptic virtual points, investigating spatial localization with and without the use of the visual modality. Our results show that, when haptic feedback gives the location of a widget, users perform 50% more accurately than with visual feedback alone. When given the haptic location of a widget alone, users are more than 30% more accurate than when given a visual location. When users were aware of the location of the haptic feedback, active exploration reduced the minimum recommended widget size from the 2 cm² of passive exploration in previous studies to 1 cm². Our results will allow designers to create better mid-air interactions using this new form of haptic feedback.
I. INTRODUCTION

Sophisticated and affordable sensors have led many to consider touchless gestural interaction in new application contexts such as desktop computers, gaming, interactive tabletops or inside cars [12,14,17]. In addition to simple tasks such as pan and zoom or switching programs, gestures above the keyboard allow users to manipulate complex widgets without the burden of using another device, such as a mouse. For example, it is possible to enable a colour picker widget with a key on the keyboard and control its wheel with mid-air gestures [17]. Mid-air gestures also provide rich interaction techniques for executing sophisticated commands, and give six degrees of freedom with which to manipulate digital content [12]. However, a key limitation of the research and products in this area is that they provide no haptic feedback: users can gesture, but they cannot feel the controls they are interacting with.

Gestures above the keyboard can curb the increasing complexity of keyboards by taking over special functionality such as media control keys, numeric keypads, specific or additional keyboard layouts, shortcuts, or even dedicated input controllers like sliders or dials. For example, introducing layers of configurable virtual input controllers in the space around the keyboard would enable a rich set of complementary controllers for secondary or specific tasks (Figure 1). However, this requires the ability to locate such controllers easily, without decreasing user performance by cluttering the display with visual information about where they are.

* dong-bach.vo@glasgow.ac.uk
† stephen.brewster@glasgow.ac.uk