We have developed a system, built on our mobile AR platform, that provides users with see-through vision: occluded objects are visualized, textured with real-time video information. We present a user study evaluating how well users viewing this information on a mobile AR computer understand the appearance of an outdoor area occluded by a building. Their understanding was compared against that of a second group of users who watched video footage of the same outdoor area on a regular computer monitor. The outdoor AR participants located specific points in the scene more accurately, and when viewing more than one video simultaneously they showed a greater improvement in speed than the indoor group.
KEYWORDS:
INTRODUCTION

Augmented Reality (AR) can be used to augment the user's view of the world with both virtual information and virtual views of real-world information [1]. This paper investigates augmenting the user's view with occluded real locations, using videos captured at remote locations and 3D geometric models of the environment. We have developed a system that renders photo-realistic views of occluded locations, displayed relative to the user's physical real-world location. An occluded object or location could be, for example, a car or building hidden behind another building, as seen in Figure 1. The system has been designed so that texture information is sourced from a video stream captured at the occluded location by a robot [1], other AR users, or a surveillance camera [5]. It is assumed that the source of video information is equipped with position and orientation sensors to aid the rendering system.

Previous research has investigated visualizing occluded objects for outdoor AR [6,11], as well as systems that render photo-realistic 3D scenes of real environments for indoor use at a desktop computer [7,9,10]. When users view occluded objects in their real-world locations using AR, they can easily comprehend the objects' position, orientation and size. Viewing remote video images by rendering them on the user's display has been shown to be usable and understandable [11]. When users can see their own surroundings with correctly registered occluded locations directly overlaid (as with AR), they can easily determine the spatial relationships between the relevant locations. The extreme alternative is for the user to view multiple remote videos, unaltered, on a regular display. This poses problems for users, who must manually determine the spatial relationships between the videos.
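The rendering step described above, texturing the occluded model with video from a posed remote camera, amounts to projecting each model vertex through the remote camera's sensed pose to obtain video texture coordinates. Below is a minimal illustrative sketch, not the paper's actual implementation, assuming a pinhole intrinsic matrix `K` and a sensor-supplied world-to-camera pose `(R, t)`; all names are hypothetical:

```python
import numpy as np

def project_to_video_uv(points_w, K, R, t, width, height):
    """Project world-space vertices of the occluded model into the remote
    camera's image, returning normalized UV texture coordinates for video
    texturing. Points behind the camera are marked with NaN."""
    pts = np.asarray(points_w, dtype=float)      # (N, 3) world-space vertices
    cam = R @ pts.T + t.reshape(3, 1)            # world frame -> camera frame
    uvw = K @ cam                                # camera frame -> homogeneous pixels
    z = uvw[2]
    uv = np.full((pts.shape[0], 2), np.nan)
    front = z > 1e-6                             # keep only points in front of camera
    uv[front, 0] = uvw[0, front] / z[front] / width   # normalize u to [0, 1]
    uv[front, 1] = uvw[1, front] / z[front] / height  # normalize v to [0, 1]
    return uv
```

A vertex with no valid projection (NaN) would simply fall outside the video's footprint and could be textured from another source or left untextured.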
Although this increases the user's cognitive load, the video images are unaltered and therefore at the highest possible quality.

In this paper we present a study investigating how well users understand video sequences recorded at various locations, comparing current techniques with an image-based rendering technique on an outdoor wearable AR system. While previous research has eval...