3D imaging systems provide valuable information for autonomous robot navigation based on landmark detection in pipelines. This paper presents a method for using a time-of-flight (TOF) camera to detect and track pipeline features such as junctions, bends and obstacles. Features are extracted by fitting a cylinder to images of the pipeline. Because the pipeline in captured images appears conic rather than cylindrical, we adjust the geometric primitive accordingly. Pixels deviating from the estimated cylinder/cone fit are grouped into blobs, and blobs satisfying constraints on shape and stability over time are tracked. The usefulness of TOF imagery as a source for landmark detection and tracking in pipelines is evaluated by comparison with auxiliary measurements. Experiments using a model pipeline and a prototype robot show encouraging results.
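To illustrate the fit-and-deviate step described in this abstract, the following is a minimal sketch in Python/NumPy, not the authors' implementation. The cone parameterization (apex, two axis angles, half-angle), the residual threshold `resid_thresh`, and the helper names `cone_residuals` and `fit_cone_and_blobs` are all our assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy import ndimage

def cone_residuals(p, pts):
    # p = [apex_x, apex_y, apex_z, theta, phi, half_angle]; axis kept
    # unit-length by using spherical angles (theta, phi).
    apex = p[:3]
    theta, phi, alpha = p[3], p[4], p[5]
    axis = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
    v = pts - apex                        # vectors from apex to points
    t = v @ axis                          # projection along the cone axis
    r = np.linalg.norm(v - np.outer(t, axis), axis=1)  # radial distance
    # approximate signed distance to the cone surface r = t * tan(alpha)
    return (r - t * np.tan(alpha)) * np.cos(alpha)

def fit_cone_and_blobs(depth_pts, shape, resid_thresh=0.02):
    """Fit a cone to back-projected range pixels and label deviating blobs.

    depth_pts: (H*W, 3) points from an organized range image; shape: (H, W).
    """
    # rough initial guess: apex behind the camera, axis along +z (down the pipe)
    p0 = np.array([0.0, 0.0, -0.5, 0.0, 0.0, 0.1])
    sol = least_squares(cone_residuals, p0, args=(depth_pts,), loss="huber")
    resid = np.abs(cone_residuals(sol.x, depth_pts)).reshape(shape)
    outliers = resid > resid_thresh       # pixels off the pipe wall
    labels, n = ndimage.label(outliers)   # group deviations into blobs
    return sol.x, labels, n
```

The robust (Huber) loss stands in for the paper's unspecified handling of outliers; the shape and temporal-stability constraints on blobs would be applied as a filter on the returned labels.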
Range imagery from time-of-flight (TOF) cameras has been shown to facilitate robot navigation in several applications. Visual navigation for autonomous pipeline inspection robots is a special case of such a task, where the cramped operating environment degrades the range measurements. The imaging system also has several inherent defects that smear the range measurements. This paper sketches an approach for using TOF cameras as a visual navigation aid in pipelines and addresses the challenges posed by the inherent defects of the imaging system and the impact of the operating environment. New results on our previously proposed strategy for detecting and tracking possible landmarks and obstacles in pipelines are presented. We consider an explicit model for correcting lens distortions and use it to explain why the cylindrical pipe is perceived as a cone. A simplified model, which implicitly handles the combined effects of the environment and the camera on the measured ranges by adjusting for the conical shape, is used to map the robot's environment into an along-axis view relative to the pipe, which facilitates obstacle traversal. Experiments using a model pipeline and a prototype camera rig are presented.
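The explicit distortion model is not given in the abstract; a standard two-coefficient radial (Brown) model is a common choice, and the sketch below shows how distorted normalized image coordinates can be inverted by fixed-point iteration. The coefficients `k1`, `k2`, the iteration count, and the function name are illustrative assumptions.

```python
import numpy as np

def undistort_normalized(xd, yd, k1, k2, iters=5):
    """Invert the radial (Brown) distortion model by fixed-point iteration.

    (xd, yd) are distorted *normalized* coordinates, i.e. pixel coordinates
    already mapped through the inverse camera matrix.
    """
    xu, yu = xd.copy(), yd.copy()
    for _ in range(iters):
        r2 = xu ** 2 + yu ** 2
        f = 1.0 + k1 * r2 + k2 * r2 ** 2   # radial distortion factor
        xu, yu = xd / f, yd / f            # refine the undistorted estimate
    return xu, yu
```

Because the distortion factor grows with radius, uncorrected pixels near the image border map to slightly wrong viewing directions, which is consistent with a straight cylinder being perceived as a cone.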
Submarine oil and gas pipeline inspection is a highly time- and cost-consuming task, and using an autonomous underwater vehicle (AUV) for such applications offers substantial savings potential. However, the AUV navigation system requires reliable localization and stable tracking of the pipeline position. We present a method for robust 3D pipeline localization relative to the AUV based on stereo vision and echo-sounder depth data. When the pipe is visible in both camera images, a standard stereo vision approach is used for localization. To enhance localization continuity, a second approach is used when the pipe is segmented out in only one of the images. This method combines one camera with depth information from the echo sounder mounted on the AUV: the plane spanned by the pipe in the camera image is intersected with the plane spanned by the sea floor to give the pipe position in 3D relative to the AUV. Closed-water recordings show that the proposed method localizes the pipe with an accuracy comparable to that of the stereo vision method. Furthermore, introducing the second pipe localization method increases the true-positive pipe localization rate by a factor of four.
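The plane-plane intersection at the heart of the single-camera fallback can be written compactly. The sketch below assumes each plane is given in the form n·x = d: the pipe plane through the camera center and the pipe's image line, and the seafloor plane from the echo-sounder depth (assumed locally flat). The function name, parameterization, and the numeric values in the usage example are ours, not from the paper.

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersect two planes n_i . x = d_i; return a point and unit direction."""
    direction = np.cross(n1, n2)           # line direction lies in both planes
    # third row pins down the unique line point with no component along it
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)          # point on the line closest to origin
    return point, direction / np.linalg.norm(direction)

# illustrative values: camera at the origin, sea floor 12 m below
point, direction = plane_intersection(
    n1=np.array([0.0, 1.0, -0.2]), d1=0.0,    # pipe plane through camera
    n2=np.array([0.0, 0.0, 1.0]), d2=-12.0)   # seafloor plane
```

The returned line is the pipe axis in 3D relative to the AUV; the method degenerates only when the two planes are near-parallel, e.g. when the pipe's image line passes through the horizon of the sea floor.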
We present an implementation of a novel foveating 3D sensor concept, inspired by the human eye, intended to allow future robots to better interact with their surroundings. The sensor is based on time-of-flight laser scanning, where each range measurement is performed individually for increased quality. Micro-mirrors enable fine control over where and when each sample point is acquired in the scene. By finding regions of interest (ROIs) and concentrating the data acquisition there, the spatial resolution or frame rate of these ROIs can be significantly increased compared to a non-foveating system. Foveation is enabled through a real-time feedback control loop for the sensor hardware, based on vision algorithms for 3D scene analysis. In this paper, we describe and apply an algorithm for detecting ROIs based on motion detection in range data using background modeling. Heuristics are incorporated to cope with camera motion. We report first results applying this algorithm to scenes with moving objects, and show that the foveation capability allows the frame rate to be increased by a factor of up to 8.2 compared to a non-foveating sensor, utilizing up to 99% of the potential frame rate increase. The incorporated heuristics significantly improve foveation performance for moving-camera scenes.
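The abstract does not specify the background model; a per-pixel running average over range frames is one simple realization, sketched below. The adaptation rate `alpha`, the motion threshold `thresh`, and the class name `RangeBackgroundModel` are illustrative assumptions, and the camera-motion heuristics mentioned in the abstract are omitted.

```python
import numpy as np
from scipy import ndimage

class RangeBackgroundModel:
    """Per-pixel running-average background over a stream of range images."""

    def __init__(self, first_frame, alpha=0.05, thresh=0.10):
        self.bg = first_frame.astype(float)
        self.alpha = alpha                 # background adaptation rate
        self.thresh = thresh               # range change counted as motion (m)

    def update(self, frame):
        moving = np.abs(frame - self.bg) > self.thresh
        # adapt only where the scene is static, so movers don't fade in
        self.bg = np.where(moving, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        labels, _ = ndimage.label(moving)  # group motion pixels into blobs
        return ndimage.find_objects(labels)  # ROI bounding slices
```

In a foveating loop, the returned ROI bounding boxes would be fed back to the micro-mirror controller so subsequent samples are concentrated on the moving regions.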