This study evaluates the effectiveness of an AR-based context-aware assembly support system with proposed AR visualization modes for object assembly. Although many AR-based assembly support systems have been proposed, few track the assembly status in real time and automatically recognize error and completion states at each step. Consequently, the effectiveness of such context-aware systems remains unexplored. Our test-bed system displays guidance information and error-detection information corresponding to the recognized assembly status in the context of building-block (LEGO) assembly. A user wearing a head-mounted display (HMD) can intuitively build a block structure on a table by visually confirming correct and incorrect blocks and locating where to attach new blocks. We propose two AR visualization modes: one displays guidance information directly overlaid on the physical model, and the other renders guidance information on a virtual model placed adjacent to the real model. A comparative evaluation was conducted to assess these AR visualization modes and to determine the effectiveness of context-aware error detection. Our experimental results indicate that the visualization mode showing the target status next to the real objects of concern outperforms the traditional direct overlay under moderate registration accuracy with marker-based tracking.
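The abstract does not give implementation details, but the context-aware loop it describes (recognize the current assembly state, flag incorrect blocks, detect step completion) can be sketched as a simple set comparison between detected and expected block placements. The sketch below is a minimal illustration under that assumption; `Placement`, `AssemblyStep`, and `check_step` are invented names, not the authors' code.

```python
# Hypothetical sketch of per-step assembly-state checking, assuming the
# vision pipeline reports detected blocks as (block_id, position) pairs.
from dataclasses import dataclass

@dataclass(frozen=True)
class Placement:
    block_id: str
    position: tuple  # (x, y, z) stud coordinates on the baseplate

@dataclass
class AssemblyStep:
    expected: set  # set[Placement] required to finish this step

def check_step(detected: set, step: AssemblyStep):
    """Classify detected placements against the current step's target.

    Returns (status, errors): status is 'complete', 'in_progress', or
    'error'; errors lists misplaced blocks to highlight in the AR view.
    """
    errors = detected - step.expected   # blocks that do not belong
    missing = step.expected - detected  # blocks still to be placed
    if errors:
        return "error", errors
    if not missing:
        return "complete", set()
    return "in_progress", set()
```

Under this reading, the AR layer only needs the returned status: it highlights `errors` in red, renders the remaining `missing` placements as guidance, and advances to the next step on `"complete"`.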
Drone navigation in complex environments poses many problems for teleoperators. Especially in three-dimensional (3D) structures such as buildings or tunnels, viewpoints are often limited to the drone's current camera view, nearby objects can be collision hazards, and frequent occlusion can hinder accurate manipulation.
To address these issues, we have developed a novel teleoperation interface that provides the user with environment-adaptive viewpoints, automatically configured to improve safety and enable smooth operation. This real-time adaptive viewpoint system takes robot position, orientation, and 3D point-cloud information into account and modifies the user's viewpoint to maximize visibility. Our prototype uses simultaneous localization and mapping (SLAM) based reconstruction with an omnidirectional camera, and we use the resulting models, as well as simulations, in a series of preliminary experiments testing navigation of various structures. The results suggest that automatic viewpoint generation can outperform first- and third-person view interfaces for virtual teleoperators in terms of ease of control and accuracy of robot operation.
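The abstract describes the adaptive-viewpoint idea only at a high level; one plausible reading is a scoring loop over candidate camera poses that trades off occlusion of the robot against clearance from nearby geometry in the point cloud. The sketch below is an assumption-laden illustration, not the authors' implementation: `score_viewpoint`, `best_viewpoint`, the 0.2 m ray tolerance, and the clearance penalty weight are all invented.

```python
# Illustrative viewpoint selection, assuming the robot position and an
# obstacle point cloud (N x 3 numpy array) are available each frame.
import numpy as np

def score_viewpoint(cam_pos, robot_pos, cloud, clearance=0.5):
    """Higher is better: penalize occlusion of the robot and tight clearance."""
    ray = robot_pos - cam_pos
    dist = np.linalg.norm(ray)
    ray_dir = ray / dist
    # Occlusion term: count cloud points lying near the camera-to-robot ray.
    rel = cloud - cam_pos
    along = rel @ ray_dir
    perp = np.linalg.norm(rel - np.outer(along, ray_dir), axis=1)
    occluders = np.sum((along > 0) & (along < dist) & (perp < 0.2))
    # Clearance term: penalize camera poses too close to any obstacle point.
    min_clear = np.min(np.linalg.norm(rel, axis=1))
    penalty = 0.0 if min_clear > clearance else (clearance - min_clear) * 10.0
    return -float(occluders) - penalty

def best_viewpoint(candidates, robot_pos, cloud):
    """Pick the highest-scoring candidate camera position for this frame."""
    return max(candidates, key=lambda c: score_viewpoint(c, robot_pos, cloud))
```

In such a scheme, candidate poses could be sampled on a sphere around the robot and re-scored every frame as the SLAM map grows, yielding the automatically configured viewpoints the abstract refers to.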