Virtual Reality (VR) has shown promising potential in many applications, such as e-business, healthcare, and social networking. Rich information about users' activities and online accounts is stored in VR devices. If such a device is carelessly left unattended, attackers, including insiders, can exploit the stored information to, for example, make in-app purchases at the legitimate owner's expense. Current solutions, which mostly follow schemes designed for general personal devices, have proved vulnerable to shoulder-surfing attacks because the headset blocks the wearer's sight. Although there have been efforts to fill this gap, they either rely on highly specialized equipment, such as electrodes that read brainwaves, or impose a heavy cognitive load by requiring users to perform a series of cumbersome authentication tasks. An authentication method for VR devices that is both robust and convenient is therefore urgently needed. In this paper, we present the design, implementation, and evaluation of BlinKey, a two-factor user authentication scheme for VR devices equipped with an eye tracker. A user's secret passcode is the recorded rhythm of his/her blinks, together with the unique pattern of pupil size variation. We call this passcode a blinkey, which is jointly characterized by knowledge-based and biometric features. To examine its performance, BlinKey is implemented on an HTC Vive Pro with a Pupil Labs eye tracker. Through extensive experimental evaluations with 52 participants, we show that our scheme achieves an average equal error rate (EER) as low as 4.0% with only 6 training samples. Moreover, it is robust against various types of attacks. BlinKey also exhibits satisfactory usability in terms of login attempts, memorability, and impact of user motions. We also carry out questionnaire-based pre-/post-studies; the survey results indicate that BlinKey is well accepted as a user authentication scheme for VR devices.
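To make the two-factor idea concrete, the sketch below illustrates how a blink-rhythm passcode (knowledge factor) and a pupil-size trace (biometric factor) could in principle be compared against an enrolled template. This is only an illustrative toy, not BlinKey's actual matching algorithm: the function names, the interval-tolerance check, and the correlation threshold are assumptions of this sketch, whereas the paper's scheme is trained from enrollment samples and evaluated by EER.

```python
import numpy as np

def match_blinkey(candidate_blinks, enrolled_blinks,
                  candidate_pupil, enrolled_pupil,
                  rhythm_tol=0.15, pupil_thresh=0.8):
    """Toy two-factor check: blink rhythm (knowledge) + pupil-size trace (biometric).

    candidate_blinks / enrolled_blinks: blink timestamps in seconds.
    candidate_pupil / enrolled_pupil: pupil-size samples over the login window.
    """
    # Knowledge factor: compare inter-blink intervals (the rhythm).
    cand_iv = np.diff(np.asarray(candidate_blinks, dtype=float))
    enr_iv = np.diff(np.asarray(enrolled_blinks, dtype=float))
    if len(cand_iv) != len(enr_iv):
        return False
    rhythm_ok = np.all(np.abs(cand_iv - enr_iv) <= rhythm_tol * np.maximum(enr_iv, 1e-6))

    # Biometric factor: correlate length-normalized, z-scored pupil-size traces.
    c = np.interp(np.linspace(0, 1, 100),
                  np.linspace(0, 1, len(candidate_pupil)), candidate_pupil)
    e = np.interp(np.linspace(0, 1, 100),
                  np.linspace(0, 1, len(enrolled_pupil)), enrolled_pupil)
    c = (c - c.mean()) / (c.std() + 1e-9)
    e = (e - e.mean()) / (e.std() + 1e-9)
    pupil_ok = float(np.dot(c, e) / len(c)) >= pupil_thresh

    return bool(rhythm_ok and pupil_ok)
```

In this toy version, both factors must pass: an attacker who observes the blink rhythm but lacks the owner's pupil-size dynamics would still be rejected by the biometric check.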
As virtual reality (VR) offers an experience unmatched by existing multimedia technologies, VR videos, also called 360-degree videos, have attracted considerable attention from academia and industry. How to quantify and model end users' perceived quality when watching 360-degree videos, known as quality of experience (QoE), lies at the center of high-quality provisioning of these multimedia services. In this work, we present EyeQoE, a novel QoE assessment model for 360-degree videos based on ocular behaviors. Unlike prior approaches, which mostly rely on objective factors, EyeQoE leverages this new ocular sensing modality to comprehensively capture both subjective and objective impact factors for QoE modeling. We propose a novel method that models eye-based cues as graphs and develop a graph convolutional network (GCN)-based classifier that produces QoE assessments by extracting intrinsic features from the graph-structured data. We further exploit a Siamese network to eliminate the impact of heterogeneity across subjects and visual stimuli. A domain adaptation scheme named MADA is also devised to generalize our model to a wide range of unseen 360-degree videos. Extensive tests are carried out on our collected dataset. The results show that EyeQoE achieves the best prediction accuracy at 92.9%, outperforming state-of-the-art approaches. As another contribution of this work, we have publicly released our dataset at https://github.com/MobiSec-CSE-UTA/EyeQoE_Dataset.git.
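As a rough illustration of the graph-based modeling idea, the sketch below builds a k-nearest-neighbor graph over gaze fixations and applies one symmetrically normalized graph-convolution step (in the style of Kipf and Welling). It is a minimal sketch under assumed inputs (fixation coordinates plus duration) and is not EyeQoE's actual graph construction, GCN architecture, Siamese pairing, or MADA scheme.

```python
import numpy as np

def gaze_graph(fixations, k=3):
    """Adjacency matrix connecting each fixation to its k nearest neighbors
    in (x, y, duration) space. `fixations` is an (n, d) array; assumed input format."""
    fixations = np.asarray(fixations, dtype=float)
    n = len(fixations)
    dists = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :], axis=-1)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dists[i])[1:k + 1]:  # skip self (distance 0)
            adj[i, j] = adj[j, i] = 1.0
    return adj

def gcn_layer(adj, features, weights):
    """One graph-convolution step: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adj + np.eye(adj.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # degree normalization
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights, 0.0)
```

A classifier along these lines would stack such layers, pool the node embeddings into a single graph representation, and map it to a QoE label; the details here are placeholders, not the paper's design.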
To cross uncontrolled roadways, where no traffic-halting signal devices are present, pedestrians with visual impairments must rely on their other senses to detect oncoming vehicles and estimate the correct crossing interval in order to avoid potentially fatal collisions. To overcome the limitations of human auditory performance, which can be severely impacted by weather or background noise, we develop an assistive tool called Acoussist, which relies on acoustic ranging to provide an additional layer of protection for pedestrian safety. Visually impaired pedestrians can use the tool to double-check surrounding traffic conditions before proceeding through a non-signaled crosswalk. Acoussist consists of vehicle-mounted external speakers that emit acoustic chirps in a frequency range imperceptible to human ears but detectable by smartphones running the Acoussist app; the app then tells the user when it is safe to cross the roadway. Several challenges arise when applying acoustic ranging to traffic detection, including measuring the instantaneous velocities and directions of multiple vehicles when many of them emit homogeneous signals simultaneously. We address these challenges by leveraging insights from a formal analysis of the received signals' time-frequency (t-f) profiles. We implement a proof-of-concept of Acoussist using commercial off-the-shelf (COTS) portable speakers and smartphones. Extensive in-field experiments validate the effectiveness of Acoussist in improving mobility for people with visual impairments.
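For intuition on how acoustic sensing can reveal vehicle motion, the sketch below estimates the radial speed of a single approaching sound source from the Doppler shift of its chirp, reading the observed frequency from an FFT peak. This is a simplified, single-vehicle illustration under assumed parameters (e.g., an 18 kHz emitted chirp); Acoussist's actual t-f analysis must additionally separate many vehicles emitting homogeneous signals, which this sketch does not attempt.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def doppler_velocity(observed_hz, emitted_hz, c=SPEED_OF_SOUND):
    """Radial speed of a sound source relative to a stationary listener.
    From f_obs = f0 * c / (c - v):  v = c * (f_obs - f0) / f_obs.
    Positive result -> approaching; negative -> receding."""
    return c * (observed_hz - emitted_hz) / observed_hz

def peak_frequency(samples, sample_rate):
    """Dominant frequency of a short audio window via a windowed FFT magnitude peak."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]
```

For example, a chirp emitted at 18,000 Hz and observed at 18,150 Hz corresponds to roughly 343 * 150 / 18150, about 2.8 m/s of approach speed.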