This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction for in-car infotainment systems. Car crashes and near-crash events are most commonly caused by driver distraction. Mid-air interaction is a way of reducing driver distraction by lowering the visual demand of infotainment systems. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures, considering the effects of each feedback modality on eye gaze behaviour and on the driving and gesturing tasks. We found that feedback modality influenced gesturing behaviour. However, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can significantly reduce visual distraction.
This paper investigates the perceivability of haptic feedback patterns delivered through an actuated surface on a steering wheel. Six solenoids were embedded along the surface of the wheel, creating three bumps under each palm. The solenoids can be used to create a range of different tactile patterns. Following the design recommendation by Gallace et al. [11], at most four of the six solenoids were actuated simultaneously, resulting in 56 patterns to test. A simulated driving study was conducted to investigate (1) the optimal number of actuated solenoids and (2) the most perceivable haptic patterns. A relationship between the number of actuated solenoids and the pattern identification rate was established: perception accuracy drops when more than three solenoids are active. Haptic patterns mirrored symmetrically across both hands were perceived more accurately. Practical applications for displaying tactile messages on the steering wheel include blind-spot warnings, upcoming road conditions, and navigation information, i.e. conveying information discreetly to the driver.
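To make the notion of mirrored patterns concrete, the short Python sketch below checks whether a pattern is left/right symmetric. The indexing convention (solenoids 0-2 under the left palm, 3-5 under the right, matched position for position) is our assumption for illustration, not a detail taken from the paper.

    def is_mirrored(active):
        """Return True if a haptic pattern is left/right symmetric.

        `active` is a set of solenoid indices 0-5. For this sketch we
        assume indices 0-2 sit under the left palm and 3-5 under the
        right, matched position for position, so a mirrored pattern
        activates solenoid i on the left exactly when it activates
        solenoid i + 3 on the right.
        """
        left = {i for i in active if i < 3}
        right = {i - 3 for i in active if i >= 3}
        return left == right

    print(is_mirrored({0, 3}))     # True: the same bump under both palms
    print(is_mirrored({0, 1, 5}))  # False: asymmetric across the hands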
A third of global greenhouse gas (GHG) emissions are attributable to the food sector; however, dietary change could reduce this by 49%. Many people intend to make eco-friendly food choices but fail to do so at the point of purchase. Educating consumers about the environmental impact of their choices while they shop may be a powerful approach to tackling climate change. This paper presents the theory- and evidence-based development of Envirofy: the first eco-friendly e-commerce grocery tool for real shoppers. We share how we used the Behaviour Change Wheel (BCW) and multidisciplinary evidence to maximise the likely effectiveness of Envirofy. We conclude with a discussion of how the HCI community can help to develop and evaluate real-time tools to close intention-behaviour gaps and ultimately reduce GHG emissions.
CCS Concepts: • Human-centered computing → Web-based interaction; Interface design prototyping; Collaborative and social computing devices.
Infotainment systems can increase mental workload and divert visual attention away from the road ahead. When these systems give information to the driver, providing it through the tactile channel on the steering wheel might improve driving behaviour and safety. This paper describes an investigation into the perceivability of haptic feedback patterns using an actuated surface on a steering wheel. Six solenoids were embedded along the rim of the steering wheel, creating three bumps under each palm. At most four of the six solenoids were actuated simultaneously, resulting in 56 patterns to test. Participants were asked to keep to the middle of the road in the driving simulator as well as possible. Overall recognition accuracy of the haptic patterns was 81.3%, and the identification rate increased as the number of active solenoids decreased (up to 92.2% for a single solenoid). There was no significant increase in lane deviation or steering angle during haptic pattern presentation. These results suggest that drivers can reliably distinguish between cutaneous patterns presented on the steering wheel. Our findings can assist in delivering non-critical messages to the driver (e.g. driving performance or incoming text messages) without decreasing driving performance or increasing perceived mental workload.
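The figure of 56 patterns follows directly from enumerating every non-empty combination of at most four of the six solenoid positions; the Python sketch below (our own illustration, not code from the paper) reproduces the count.

    from itertools import combinations

    SOLENOIDS = range(6)  # three positions under each palm

    # A pattern activates between one and four solenoids simultaneously.
    patterns = [combo
                for k in range(1, 5)
                for combo in combinations(SOLENOIDS, k)]

    print(len(patterns))  # 6 + 15 + 20 + 15 = 56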
Segmenting audio into homogeneous sections, such as music and speech, helps us understand the content of audio. It is useful as a preprocessing step for indexing, storing, and modifying audio recordings, radio broadcasts, and TV programmes. Deep learning models for segmentation are generally trained on copyrighted material, which cannot be shared, and annotating these datasets is time-consuming and expensive, which significantly slows down research progress. In this study, we present a novel procedure that artificially synthesises data resembling radio signals. We replicate the workflow of a radio DJ in mixing audio and investigate parameters such as fade curves and audio ducking. We trained a Convolutional Recurrent Neural Network (CRNN) on this synthesised data and outperformed state-of-the-art algorithms for music-speech detection. This paper demonstrates that the data synthesis procedure is a highly effective technique for generating large datasets to train deep neural networks for audio segmentation.
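As a rough illustration of the kind of synthesis the paper describes, the sketch below mixes a speech excerpt over music using a cosine fade curve and audio ducking. The sample rate, fade length, and ducking gain are hypothetical values chosen for the example, not parameters reported in the paper.

    import numpy as np

    SR = 22050  # sample rate in Hz; an arbitrary choice for this sketch

    def dj_transition(music, speech, fade_s=2.0, duck_gain=0.3, sr=SR):
        """Mix a speech segment over music the way a radio DJ might.

        The music is faded down to `duck_gain` while the speech plays and
        faded back up afterwards, using a cosine fade curve. `music` and
        `speech` are mono float arrays sampled at `sr`.
        """
        n_fade = int(fade_s * sr)
        n_speech = min(len(speech), len(music) - 2 * n_fade)
        t = np.linspace(0.0, 0.5 * np.pi, n_fade)
        fade_down = duck_gain + (1.0 - duck_gain) * np.cos(t)  # 1 -> duck
        fade_up = duck_gain + (1.0 - duck_gain) * np.sin(t)    # duck -> 1

        gain = np.ones(len(music))
        gain[:n_fade] = fade_down
        gain[n_fade:n_fade + n_speech] = duck_gain
        gain[n_fade + n_speech:n_fade + n_speech + n_fade] = fade_up

        mixed = music * gain
        mixed[n_fade:n_fade + n_speech] += speech[:n_speech]
        return mixed

A training example for music-speech detection could then pair the mixed signal with frame-level labels marking where the speech segment sits.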