In this paper, we applied the concept of diminished reality to remove a content-irrelevant pedestrian (i.e., a real object) in the context of handheld augmented reality (AR). We prepared three view conditions: in the Transparent (TP) condition, the pedestrian was removed entirely; in the Semi-transparent (STP) condition, the pedestrian was rendered semi-transparent; and in the Default (DF) condition, the pedestrian appeared as is. We conducted a user study to compare the effects of the three conditions on users' engagement with and perception of a virtual pet in the AR content. Our findings revealed that users felt less distracted from the AR content in the TP and STP conditions than in the DF condition. Furthermore, in the TP condition, users perceived the virtual pet as more lifelike and its behavior as more plausible, and they reported a higher sense of spatial presence in the real environment.
CCS CONCEPTS: • Human-centered computing → User studies; Mixed / augmented reality.
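As an illustration of how the three view conditions could be realized, the following minimal sketch (not the paper's implementation) composites a single video frame under each condition, assuming a binary pedestrian mask and an inpainted background estimate are already available; the alpha values are illustrative assumptions.

import numpy as np

def composite(frame, background, mask, condition):
    """frame, background: HxWx3 float arrays; mask: HxW boolean pedestrian mask."""
    # Assumed per-condition opacity of the pedestrian region.
    alpha = {"DF": 1.0,    # pedestrian shown as is
             "STP": 0.4,   # pedestrian semi-transparent (illustrative value)
             "TP": 0.0}[condition]  # pedestrian fully removed
    m = mask[..., None]    # broadcast the mask over the color channels
    return np.where(m, alpha * frame + (1.0 - alpha) * background, frame)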
People are interested in traveling through infinite virtual environments, but no standard navigation method yet exists in Virtual Reality (VR). The Walking-In-Place (WIP) technique is a navigation method that simulates locomotion to enable immersive travel with less simulator sickness in VR. However, attaching sensors to the body is cumbersome. A previously introduced method performed WIP using an Inertial Measurement Unit (IMU) and addressed this problem, since it requires no additional sensors on the body, and its evaluation showed acceptable WIP performance. However, that method has limitations, including falsely recognizing steps when the user performs other body motions within the tracking area, and its step-recognition accuracy was not evaluated. In this paper, we propose a novel WIP method using the position and orientation tracking provided by most PC-based VR HMDs. Our method likewise requires no additional sensors on the body and is more stable than the IMU-based method for non-WIP motions. We evaluated our method with nine subjects and found that the WIP step-recognition accuracy was 99.32% regardless of head tilt, and the error rate was 0% for the squat motion, which is particularly prone to false detections. We distinguish jog-in-place as “intentional motion” and other motions as “unintentional motion”; the results show that our method correctly recognizes only jog-in-place. We also apply a saw-tooth virtual-velocity function to our method in a mathematically defined way; applying this virtual-velocity approach to WIP enables natural navigation. Our method is useful for various applications that require jogging.
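The following minimal sketch (not the authors' implementation) illustrates the general idea of recognizing jog-in-place steps from the HMD's tracked head height and mapping each step onto a saw-tooth virtual forward velocity; all thresholds, the baseline filter, and the decay constant are assumptions chosen for illustration only.

import math

STEP_THRESHOLD = 0.03     # assumed head drop (m) counted as a jog-in-place step
SQUAT_THRESHOLD = 0.15    # assumed deeper drop (m) treated as unintentional motion
MIN_STEP_INTERVAL = 0.25  # assumed minimum time (s) between consecutive steps
PEAK_SPEED = 2.0          # assumed virtual speed (m/s) right after a step
DECAY_TIME = 0.6          # assumed time (s) for the saw-tooth velocity to reach zero

class WipController:
    def __init__(self):
        self.baseline = None
        self.since_step = math.inf

    def update(self, head_height, dt):
        """Call once per frame with the tracked head height (m) and frame time (s);
        returns the current virtual forward speed (m/s)."""
        if self.baseline is None:
            self.baseline = head_height
        # Low-pass filter that slowly tracks the resting head height.
        self.baseline += 0.5 * dt * (head_height - self.baseline)
        drop = self.baseline - head_height
        if drop > SQUAT_THRESHOLD:
            # A deep drop is treated as unintentional motion (e.g., a squat): no step.
            self.since_step += dt
        elif drop > STEP_THRESHOLD and self.since_step > MIN_STEP_INTERVAL:
            self.since_step = 0.0  # jog-in-place step detected
        else:
            self.since_step += dt
        # Saw-tooth velocity: jumps to PEAK_SPEED at each step, then decays linearly.
        return max(0.0, PEAK_SPEED * (1.0 - self.since_step / DECAY_TIME))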
Interactions with embodied conversational agents can be enhanced using human-like co-speech gestures. Traditionally, rule-based co-speech gesture mapping has been used for this purpose. However, creating such a mapping is laborious and often requires human experts, and human-created mappings tend to be limited and therefore prone to generating repeated gestures. In this article, we present an approach that automates the generation of rule-based co-speech gesture mapping from a publicly available large video dataset without the intervention of human experts. At run time, word embeddings are used for rule searching to retrieve semantically aware, meaningful, and accurate rules. The evaluation indicated that our method achieved performance comparable to a manual map created by human experts, while activating a greater variety of gestures. Moreover, synergy effects were observed in users' perception of the generated co-speech gestures when our map was combined with the manual map.
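The following minimal sketch (not the article's system) illustrates how a word embedding could be used at run time to retrieve a gesture rule for a spoken word via cosine similarity; the rule table, the embed lookup, and the similarity threshold are hypothetical placeholders.

import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def find_gesture(word, rules, embed, threshold=0.5):
    """rules: dict mapping keyword -> gesture id; embed: callable word -> vector."""
    if word in rules:                       # exact keyword match wins
        return rules[word]
    query = embed(word)
    best_key, best_sim = None, threshold    # require a minimum similarity
    for key in rules:
        sim = cosine(query, embed(key))
        if sim > best_sim:
            best_key, best_sim = key, sim
    return rules[best_key] if best_key is not None else None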