In this paper, we propose a new approach to vision-based robot control, called 2-1/2-D visual servoing, which avoids the respective drawbacks of classical position-based and image-based visual servoing. Contrary to position-based visual servoing, our scheme does not need any geometric three-dimensional (3-D) model of the object. Furthermore, contrary to image-based visual servoing, our approach ensures the convergence of the control law in the whole task space. 2-1/2-D visual servoing is based on the estimation of the partial camera displacement from the current to the desired camera poses at each iteration of the control law. Visual features and data extracted from the partial displacement allow us to design a decoupled control law that controls the six camera degrees of freedom (DOF). The robustness of our visual servoing scheme with respect to camera calibration errors is also analyzed: necessary and sufficient conditions for local asymptotic stability are easily obtained. Then, owing to the simple structure of the system, sufficient conditions for global asymptotic stability are established. Finally, experimental results with an eye-in-hand robotic system confirm the improvement in the stability and convergence domain of 2-1/2-D visual servoing with respect to classical position-based and image-based visual servoing.

Index Terms: Eye-in-hand system, scaled Euclidean reconstruction, visual servoing.
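As a rough illustration of the control loop the abstract describes (a partial camera displacement estimated at each iteration, rotation driven by an axis-angle error, translation by image features extended with a scaled depth term), here is a minimal Python sketch. It is not the authors' implementation: it assumes OpenCV for homography estimation and decomposition, the function name `twohalfd_control`, the gain `LAMBDA`, and the externally supplied depth ratio `Z_ratio` are illustrative, and the interaction matrix is simplified to identity to keep the sketch self-contained.

```python
# Minimal sketch of one iteration of a 2-1/2-D visual servoing loop.
# Assumptions (not from the paper): OpenCV homography tools, a known
# intrinsic matrix K, and a precomputed depth ratio Z/Z*.
import numpy as np
import cv2

LAMBDA = 0.5  # proportional control gain (illustrative value)

def twohalfd_control(pts_cur, pts_des, K, Z_ratio):
    """One control iteration: returns a 6-vector camera velocity (v, omega).

    pts_cur, pts_des : (N, 2) matched image points in the current and
                       desired views (N >= 4).
    K                : (3, 3) camera intrinsic matrix.
    Z_ratio          : estimated depth ratio Z/Z* of the reference point.
    """
    # Homography between the two views, then a Euclidean decomposition
    # to recover the partial displacement (rotation R, scaled translation).
    H, _ = cv2.findHomography(pts_cur, pts_des, cv2.RANSAC)
    num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    R = Rs[0]  # in practice, disambiguate the candidate solutions

    # Rotation error as the axis-angle vector u*theta (log of R).
    u_theta, _ = cv2.Rodrigues(R)
    u_theta = u_theta.ravel()

    # Translation-related error from image data: a reference point's
    # normalized coordinates plus the log of the depth ratio; this mix of
    # 2-D image features and one scaled 3-D quantity is the "2-1/2-D" part.
    x_cur = np.linalg.inv(K) @ np.append(pts_cur[0], 1.0)
    x_des = np.linalg.inv(K) @ np.append(pts_des[0], 1.0)
    e_t = np.array([x_cur[0] - x_des[0],
                    x_cur[1] - x_des[1],
                    np.log(Z_ratio)])

    # Decoupled proportional law: rotation is driven directly by u*theta,
    # translation by the extended image error (interaction matrix taken
    # as identity here purely for brevity).
    omega = -LAMBDA * u_theta
    v = -LAMBDA * e_t
    return np.hstack([v, omega])
```

The decoupling visible in the last lines, where the rotational velocity depends only on the estimated rotation and the translational velocity only on the extended image error, is what lets the scheme avoid the coupled, possibly singular interaction matrix of purely image-based servoing while still needing no 3-D model of the object.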