Introduction

Time Follower's Vision is an innovative visual presentation system for remote vehicle control. Autonomous robots offer a wide variety of enhancements to daily life, and research on this class of robot continues to advance worldwide. For fields such as medical treatment, search-and-rescue missions, and interpersonal communication, however, the optimal approach is generally a robot with advanced movement capabilities under non-autonomous control.

A non-autonomous robot requires a human operator, so an efficient human interface is essential for good performance. In robotic systems controlled via telexistence, the operator performs remote tasks dexterously with the physical sensation of existing in a surrogate robot working in the remote environment. Although operation systems that let the operator sense the remote environment have been developed, they require large-scale, high-cost equipment [Tachi 1998]. Further, operators need extensive training to infer the posture of the vehicle from such limited information.

Time Follower's Vision is a control system that addresses these problems by producing a virtual image using Mixed Reality technology and presenting the vehicle's surrounding environment and status to the operator. As a result, even inexperienced operators can readily understand the posture of the vehicle and the surrounding situation.

Exposition

Time Follower's Vision is a vision presentation technique that conveys the size, position, and environment of the vehicle, allowing even inexperienced operators to control it with ease. This is achieved with a simple camera: the captured image is presented on a device such as a computer monitor, and the operator controls the robot simply by looking at the image on the screen. Figure 1 shows a vehicle equipped with a camera controlled with Time Follower's Vision.
Figure 1: A vehicle operated remotely.
Figure 2: A snapshot and system outline.

With this technique, each image captured by the vehicle's camera during remote-control operation is stored in a database along with time and position information. The system then searches the database for the image that best conveys the vehicle's current position and environment. The image is selected by an evaluation function that considers the field of view, the position of the camera, and the current position of the vehicle. Once an image has been selected, a CG model of the vehicle is superimposed on it, and the composite is presented from a viewpoint that gives the operator the impression of actually standing behind the vehicle. The resulting human interface allows virtually anyone to perform the control with ease by viewing both the vehicle and its environment. Figure 2 shows a snapshot of an actual system and a system outline. The operator can correctly understand the relation between the vehicle's position and the surrounding environment. Using this system, even if the vehicle ...
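The selection step described above can be sketched in Python. This is a minimal illustration, not the published system: the excerpt names the three criteria of the evaluation function (field of view, camera position, current vehicle position) but not its actual form, so the scoring terms, the `ideal_dist` parameter, and the `Frame` record layout below are all assumptions made for the sketch.

```python
import math
from dataclasses import dataclass

@dataclass
class Frame:
    """One stored camera image with the vehicle pose at capture time."""
    timestamp: float
    x: float              # camera position when the frame was captured
    y: float
    heading: float        # camera viewing direction, radians
    fov: float            # horizontal field of view, radians
    image: object = None  # pixel data, omitted in this sketch

def score(frame, vx, vy, ideal_dist=2.0):
    """Evaluate how well a past frame shows the current vehicle position.

    Higher is better. Combines two assumed terms based on the criteria
    named in the text:
      * the vehicle must lie inside the frame's field of view, and
        scores higher the closer it is to the image centre;
      * the camera should sit roughly `ideal_dist` units behind the
        vehicle, so the operator appears to look on from behind.
    """
    dx, dy = vx - frame.x, vy - frame.y
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return 0.0  # camera on top of the vehicle: useless viewpoint
    # Angle between the camera's heading and the direction to the vehicle.
    bearing = math.atan2(dy, dx)
    off_axis = abs((bearing - frame.heading + math.pi) % (2 * math.pi) - math.pi)
    if off_axis > frame.fov / 2:
        return 0.0  # vehicle falls outside this frame's field of view
    view_term = 1.0 - off_axis / (frame.fov / 2)    # centred is best
    dist_term = 1.0 / (1.0 + abs(dist - ideal_dist))
    return view_term * dist_term

def select_frame(database, vx, vy):
    """Return the stored frame that best frames the current vehicle position."""
    return max(database, key=lambda f: score(f, vx, vy), default=None)
```

The chosen frame would then serve as the background onto which the CG vehicle model is composited at the vehicle's current pose.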