Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies on multi-sensory integration of visual and haptic cues have shown that the integration follows maximum likelihood estimation (MLE). However, little work has addressed the integration of force and vibrotactile cues, two sub-categories of the haptic modality. This paper therefore investigates whether MLE is suitable for integrating these sub-categorical cues. Within a stereoscopic VE, participants performed a 3D interactive task: navigating a flying drone along a high-voltage transmission line in an inaccessible region and identifying defects on the line. Defects had to be identified via the force cue alone, the vibrotactile cue alone, or their combination in a co-located or dislocated setting. The co-located setting delivered both cues to the right hand, whereas the dislocated setting delivered the force cue to the right hand and the vibrotactile cue to the right forearm. Task performance, namely completion time and accuracy, was assessed under each cue and setting. The presence of the vibrotactile cue yielded better performance than the force cue alone, consistent with the role of tactile cues in sensing surface properties and providing a baseline for applying MLE. Performance under the co-located setting showed some degree of combination of the individual-cue performances, whereas performance under the dislocated setting resembled that under the vibrotactile cue alone. These observations suggest that MLE does not conclusively explain the integration of both cues in a co-located setting for 3D user interaction.
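For reference, the MLE account of cue integration mentioned above predicts that the combined estimate is a reliability-weighted average of the single-cue estimates, with a variance no larger than either cue's. The sketch below illustrates this standard prediction in the usual Ernst-and-Banks-style formulation; it is not code from the paper, and the means and standard deviations are hypothetical placeholders.

```python
import numpy as np

def mle_combine(mu_force, sigma_force, mu_vibro, sigma_vibro):
    """Standard MLE cue integration: reliability-weighted average of two cues.

    Reliabilities are inverse variances; the combined variance is smaller
    than either single-cue variance (the hallmark MLE prediction).
    """
    r_force = 1.0 / sigma_force**2           # reliability of the force cue
    r_vibro = 1.0 / sigma_vibro**2           # reliability of the vibrotactile cue
    w_force = r_force / (r_force + r_vibro)  # weight given to the force cue
    w_vibro = 1.0 - w_force                  # weight given to the vibrotactile cue

    mu_combined = w_force * mu_force + w_vibro * mu_vibro
    sigma_combined = np.sqrt(1.0 / (r_force + r_vibro))
    return mu_combined, sigma_combined

# Hypothetical single-cue estimates (e.g., perceived defect position along the line):
mu_c, sigma_c = mle_combine(mu_force=0.52, sigma_force=0.08,
                            mu_vibro=0.47, sigma_vibro=0.04)
print(f"combined mean = {mu_c:.3f}, combined sd = {sigma_c:.3f}")
```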
In a three-dimensional (3D) virtual environment (VE), proper collaboration between vibrotactile and force cues, two cues of the haptic modality, is important to facilitate the task performance of human users. Many studies report that collaborations between multi-sensory cues follow maximum likelihood estimation (MLE). However, an existing work finds that MLE yields a mean mismatch and an amplitude mismatch when interpreting the collaboration between the vibrotactile and force cues. We therefore proposed mean-shifted MLE and conducted a human study to investigate these mismatches. For the study, we created a VE that replicated the visual scene, the 3D interactive task, and the cues from the existing work. Our participants were biased to rely on the vibrotactile cue for their tasks, departing from the unbiased reliance on both cues in the existing work. Assessments of task completion time and task accuracy validated the replication. We found that, based on task accuracy, MLE explained the cue collaboration to a certain degree, in agreement with the existing work. Mean-shifted MLE remedied the mean mismatch but maintained the amplitude mismatch. Further examination revealed that the collaboration between both cues may not be entirely additive. This provides insight for proper modeling of the collaboration between vibrotactile and force cues to aid interactive tasks in VEs.
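The mean-shifted variant is not specified in this abstract. A minimal sketch, assuming it simply adds a fitted bias term to the MLE-predicted mean (to absorb the reported mean mismatch) while keeping the variance-weighted combination unchanged, could look as follows; the offset `delta` and all numeric values are assumptions for illustration only.

```python
import numpy as np

def mean_shifted_mle(mu_force, sigma_force, mu_vibro, sigma_vibro, delta):
    """Hypothetical mean-shifted MLE: standard MLE combination plus a mean offset.

    Assumption: the shift `delta` corrects the mean mismatch between the MLE
    prediction and observed performance; the variance prediction is unchanged,
    which is consistent with an amplitude mismatch persisting.
    """
    r_force, r_vibro = 1.0 / sigma_force**2, 1.0 / sigma_vibro**2
    w_force = r_force / (r_force + r_vibro)
    mu = w_force * mu_force + (1.0 - w_force) * mu_vibro + delta  # shifted mean
    sigma = np.sqrt(1.0 / (r_force + r_vibro))                    # unchanged spread
    return mu, sigma

# Illustrative values only; in practice delta would be fitted to the observed data.
print(mean_shifted_mle(0.52, 0.08, 0.47, 0.04, delta=0.03))
```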
Current virtual environments (VEs) enable users to perceive haptic stimuli that facilitate 3D user interaction, but they lack brain-interfacial content. Using electroencephalography (EEG), we conducted a feasibility study exploring event-related potential (ERP) patterns in users' brain responses during haptic interaction within a VE. The interaction consisted of flying a virtual drone along a curved transmission line to detect defects signaled by the stimuli (e.g., a force increase and/or vibrotactile cues). We found variations in the peak amplitudes and latencies (as ERP patterns) of the responses starting at about 200 ms after stimulus onset. The largest negative peak occurred 200-400 ms after onset in all vibration-related blocks, and its amplitude and latency were differentiable among those blocks. These findings suggest that brain responses during haptic interaction within VEs can feasibly be decoded.
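The ERP patterns reported above (peak amplitude and latency within a post-stimulus window) can be extracted from epoched EEG by searching for the most negative deflection of the trial-averaged waveform. The sketch below is a generic NumPy illustration, not the study's analysis pipeline; the sampling rate, window, and array layout are assumptions.

```python
import numpy as np

def negative_peak(epochs, fs, t0=0.2, t1=0.4):
    """Find the largest negative ERP peak in a post-stimulus window.

    epochs : array of shape (n_trials, n_samples), baseline-corrected and
             time-locked to stimulus onset at sample 0 (assumed layout).
    fs     : sampling rate in Hz.
    Returns the peak amplitude and latency (s) of the trial-averaged ERP
    within the 200-400 ms window discussed above.
    """
    erp = epochs.mean(axis=0)            # average across trials
    i0, i1 = int(t0 * fs), int(t1 * fs)  # window boundaries in samples
    idx = i0 + np.argmin(erp[i0:i1])     # most negative sample in the window
    return erp[idx], idx / fs

# Hypothetical data: 40 trials, 1 s of post-onset EEG sampled at 500 Hz.
rng = np.random.default_rng(0)
fake_epochs = rng.normal(0.0, 1.0, size=(40, 500))
amp, lat = negative_peak(fake_epochs, fs=500)
print(f"peak amplitude = {amp:.2f} at {lat * 1000:.0f} ms")
```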