This paper presents a new multi-layer line detection algorithm suited to embedded robotic applications. First, contour points are detected along equally spaced columns or rows and stored in a multi-layer array. The line detection algorithm, based on the Wall and Danielsson algorithm, is then run layer by layer. Experiments have been carried out on real images taken from a robot's built-in camera, and the line detection rate and accuracy are evaluated. The algorithm features several parameters that can be adjusted for various embedded applications.
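The segment-splitting step named above follows Wall and Danielsson's area-deviation criterion. The sketch below is a generic reconstruction of that idea, not the paper's exact implementation: the algebraic area between the point chain and the chord from the current segment start is accumulated, and a segment is closed when the area-to-chord-length ratio exceeds a tolerance. The function name and the `tol` parameter are illustrative.

```python
import math

def wall_danielsson(points, tol):
    """Split an ordered chain of (x, y) points into line segments
    using an area-deviation test (after Wall & Danielsson).

    `area` accumulates the algebraic area between the chain and the
    chord from the segment start to the current point; the segment
    is closed when |area| / chord_length > tol.
    """
    segments = []
    start_idx = 0
    area = 0.0
    i = 1
    while i < len(points):
        x0, y0 = points[start_idx]
        # coordinates relative to the current segment start
        x, y = points[i][0] - x0, points[i][1] - y0
        xp, yp = points[i - 1][0] - x0, points[i - 1][1] - y0
        # incremental algebraic area contributed by the new point
        area += 0.5 * (x * yp - xp * y)
        length = math.hypot(x, y)
        if length > 0 and abs(area) / length > tol:
            # deviation too large: close the segment at the previous
            # point and re-examine the current point from there
            segments.append((points[start_idx], points[i - 1]))
            start_idx = i - 1
            area = 0.0
        else:
            i += 1
    segments.append((points[start_idx], points[-1]))
    return segments
```

Because the test is purely incremental (one multiply-add and one square root per point), it is well suited to the limited computing power of an embedded platform.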
I. Introduction

In robotic applications, lines carry information about edges and borders of polygonal regions. If the line detection process is part of an embedded vision system, it must meet strong real-time constraints to keep the image processing rate at the video rate. It must also return reliable information to both the localization and behavior modules of the robot. Indeed, lines are widely used for self-localization.

This paper presents a new multi-layer line detection algorithm that can be used for embedded robotic applications. The algorithm has been implemented on a quadruped robot equipped with a CMOS camera with a resolution of 208×176 pixels. The main constraints come from limited computing power. Because of leg impacts on the ground, the images can also bounce, which rules out temporal smoothing techniques. The robustness of the proposed method is therefore a key factor: the robot must be able to detect lines while moving and under varying lighting conditions.

The first part of this paper focuses on fast edge detection and explains how edge points are stored into multi-layer arrays. The second part describes the initial line detection algorithm. An improved method is then introduced to avoid the main drawbacks of the first. Both methods are compared in terms of robustness to noisy edge detection, as the robot should be able to detect lines even under poor lighting conditions, and in terms of processing time.
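The fast edge detection and multi-layer storage described above can be sketched as follows. This is one plausible reading, not the paper's implementation: image columns are sampled at a fixed spacing, a simple 1-D gradient threshold marks contour points along each scanned column, and layer k collects the k-th contour point met in each column. The `step`, `thresh`, and `max_layers` parameters are illustrative stand-ins for the adjustable parameters the paper mentions.

```python
import numpy as np

def scan_edge_points(gray, step=8, thresh=30, max_layers=4):
    """Detect contour points along equally spaced columns of a
    grayscale image and store them in a multi-layer array.

    layers[k][j] is the k-th contour point found along column
    cols[j] (top to bottom), or None if that column has fewer
    than k + 1 contour points.
    """
    h, w = gray.shape
    cols = list(range(0, w, step))
    layers = [[None] * len(cols) for _ in range(max_layers)]
    for j, x in enumerate(cols):
        col = gray[:, x].astype(int)
        # a 1-D gradient magnitude above `thresh` marks a contour point
        ys = np.nonzero(np.abs(np.diff(col)) > thresh)[0]
        for k, y in enumerate(ys[:max_layers]):
            layers[k][j] = (x, int(y))
    return layers
```

Scanning only every `step`-th column keeps the cost proportional to the number of sampled scan lines rather than to the full image, which is what makes the approach compatible with video-rate processing on a low-power platform; scanning rows instead of columns is symmetric.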