One way to deal with occlusions or loss of tracking of the visual features used in visual servoing tasks is to predict the feature behavior in the image plane while measurements are missing. Different prediction and correction methods have been proposed in the literature. The purpose of this paper is to compare and experimentally validate some of these methods for both eye-in-hand and eye-to-hand configurations. In particular, we show that a correction based on both the image and the camera/target pose provides the best results.
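The prediction step described above is commonly implemented by integrating the feature kinematics through the interaction matrix of an image point. The sketch below is a minimal, simplified illustration (not the paper's implementation): it assumes normalized image coordinates, a known point depth Z, a known camera velocity twist, and simple Euler integration; all function names are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def predict_feature(x, y, Z, v_c, dt, steps=1):
    """Euler-integrate s_dot = L(s, Z) v_c while the measurement is missing.

    v_c = (vx, vy, vz, wx, wy, wz) is the camera velocity twist, assumed
    known from odometry during the occlusion.
    """
    for _ in range(steps):
        x_dot, y_dot = interaction_matrix(x, y, Z) @ v_c
        vx, vy, vz, wx, wy, wz = v_c
        # Depth kinematics of the 3D point: Z_dot = -vz + Z * (x*wy - y*wx)
        Z_dot = -vz + Z * (x * wy - y * wx)
        x, y, Z = x + x_dot * dt, y + y_dot * dt, Z + Z_dot * dt
    return x, y, Z
```

For example, with a pure forward translation (vz > 0) a feature at x > 0 drifts outward in the image while its depth decreases, which matches the expected looming behavior.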
Predicting the behavior of visual features on the image plane over a future time horizon is important in many control problems: for example, when dealing with occlusions (or other constraints such as joint limits) in a classical visual servoing loop, or in the more advanced model predictive control schemes recently proposed in the literature. Several methods have been proposed for the initial correction step before propagating the visual features from the measurements currently available from the camera, but the predictions proposed so far are inaccurate when the depths of the tracked points are not correctly estimated. We therefore propose in this paper a new correction strategy that directly corrects the relative pose between the camera and the target instead of only adjusting the error on the image plane. This correction is then analysed and compared by evaluating the corresponding improvement in the feature prediction phase.
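The key difference between an image-only correction and a pose-based one is that re-estimating the camera/target pose also repairs the point depths used in the prediction. The sketch below is a deliberately simplified illustration of that idea, not the paper's algorithm: it assumes the target orientation R is known (so only the translation t is corrected), normalized image coordinates, and a known 3D model of the target points; each measurement then gives two equations linear in t.

```python
import numpy as np

def correct_translation(model_points, measured_xy, R):
    """Pose-based correction sketch (known orientation R, unknown translation t).

    A model point p projects to x = (r1.p + tx) / (r3.p + tz), so each
    measured (x, y) yields two linear constraints on t:
        -tx + x*tz = (r1 - x*r3) . p
        -ty + y*tz = (r2 - y*r3) . p
    Solving them in least squares recovers t, and hence consistent depths
    Z = r3.p + tz, which an image-plane-only shift cannot provide.
    """
    r1, r2, r3 = R
    A, b = [], []
    for p, (x, y) in zip(model_points, measured_xy):
        A.append([-1.0, 0.0, x]); b.append((r1 - x * r3) @ p)
        A.append([0.0, -1.0, y]); b.append((r2 - y * r3) @ p)
    t, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return t
```

In the general case (unknown R as well) the same correction would be performed with a full pose estimation method such as PnP; the linear-in-t version above is only meant to show why correcting the pose fixes the depth error that degrades the prediction.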
We propose in this paper a new active perception scheme, based on model predictive control under constraints, for generating a sequence of visual servoing tasks. The proposed control scheme computes the motion of a camera whose task is to successively observe a set of robots in order to measure their positions and improve the accuracy of their localization. The method predicts an uncertainty model (accounting for actuation and measurement noise) to determine which robot the camera should observe next. Simulation results are presented to validate the approach.
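The selection logic described above can be illustrated with a much-simplified covariance bookkeeping sketch (not the paper's MPC formulation): each robot's localization covariance grows with actuation noise at every step, a camera observation shrinks it via a Kalman update, and the camera is pointed at the robot whose predicted uncertainty is largest. The sketch assumes a constant-position motion model (F = I), a direct position measurement (H = I), and a greedy one-step criterion in place of the full receding-horizon optimization; all names are illustrative.

```python
import numpy as np

def propagate(P, Q):
    """Prediction step: actuation noise Q inflates the covariance (F = I)."""
    return P + Q

def observe(P, Rm):
    """Kalman covariance update for a direct position measurement (H = I)."""
    K = P @ np.linalg.inv(P + Rm)
    return (np.eye(P.shape[0]) - K) @ P

def select_robot(covariances):
    """Greedy stand-in for the MPC criterion: observe the robot whose
    predicted localization uncertainty (trace of covariance) is largest."""
    return max(range(len(covariances)), key=lambda i: np.trace(covariances[i]))
```

In a simulation loop, one would propagate every robot's covariance each step, call `select_robot`, and apply `observe` only to the selected robot, so the camera alternates between robots as their uncertainties overtake one another.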