This paper presents a tracking system for ego-motion estimation that fuses vision and inertial measurements using EKF and UKF (Extended and Unscented Kalman Filters) and compares their performance. It also accounts for the multi-rate nature of the sensors: inertial sensing is sampled at a high frequency, while vision is sampled at a lower one. The proposed approach uses a constant linear acceleration model and a quaternion-based constant angular velocity model, which yields a non-linear process model and linear measurement equations. Results show that fusing both measurements significantly improves the estimation with respect to using vision or inertial measurements alone. It is also shown that the proposed system can estimate fast motions even when the vision system fails. Moreover, a study of the influence of the noise covariances is performed, aimed at selecting appropriate values during the tuning process. The setup is an end-effector-mounted camera, which allows us to pre-define basic rotational and translational motions for validating the results.
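A minimal sketch of the multi-rate fusion idea described in this abstract, assuming a state x = [p, v, a, q, w] (position, velocity, acceleration, unit quaternion, angular velocity), a non-linear constant-acceleration / constant-angular-velocity process model, and linear measurement equations for both sensors. The state ordering, sampling rates, and noise values below are illustrative assumptions, not taken from the paper, and a finite-difference Jacobian stands in for the analytic EKF Jacobian.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def f(x, dt):
    """Constant linear acceleration + constant angular velocity process model."""
    p, v, a, q, w = x[0:3], x[3:6], x[6:9], x[9:13], x[13:16]
    p_n = p + v*dt + 0.5*a*dt**2
    v_n = v + a*dt
    ang = np.linalg.norm(w)*dt
    axis = w/np.linalg.norm(w) if np.linalg.norm(w) > 1e-12 else np.zeros(3)
    dq = np.concatenate(([np.cos(ang/2.0)], np.sin(ang/2.0)*axis))
    q_n = quat_mult(q, dq)
    return np.concatenate([p_n, v_n, a, q_n/np.linalg.norm(q_n), w])

def num_jacobian(x, dt, eps=1e-6):
    """Finite-difference Jacobian of f for the EKF covariance propagation."""
    F = np.zeros((x.size, x.size))
    fx = f(x, dt)
    for i in range(x.size):
        xp = x.copy(); xp[i] += eps
        F[:, i] = (f(xp, dt) - fx)/eps
    return F

# Both sensors enter through linear measurement equations:
# vision observes pose (p, q), the IMU observes (a, w).
H_cam = np.zeros((7, 16)); H_cam[0:3, 0:3] = np.eye(3); H_cam[3:7, 9:13] = np.eye(4)
H_imu = np.zeros((6, 16)); H_imu[0:3, 6:9] = np.eye(3); H_imu[3:6, 13:16] = np.eye(3)

def predict(x, P, Q, dt):
    F = num_jacobian(x, dt)
    return f(x, dt), F @ P @ F.T + Q

def update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    x[9:13] /= np.linalg.norm(x[9:13])      # keep the quaternion normalized
    return x, (np.eye(x.size) - K @ H) @ P

# Multi-rate loop: predict and correct with the IMU at its fast rate,
# correct with vision only when a (slower) camera measurement arrives.
imu_dt, cam_every = 0.01, 10                # e.g. 100 Hz IMU, 10 Hz vision
x = np.zeros(16); x[9] = 1.0                # start at identity orientation
P, Q = np.eye(16)*0.1, np.eye(16)*1e-4      # Q and R are the tuning parameters
R_imu, R_cam = np.eye(6)*1e-2, np.eye(7)*1e-3
for k in range(200):
    x, P = predict(x, P, Q, imu_dt)
    z_imu = np.zeros(6)                     # placeholder IMU sample
    x, P = update(x, P, z_imu, H_imu, R_imu)
    if k % cam_every == 0:
        z_cam = np.concatenate([np.zeros(3), [1.0, 0, 0, 0]])  # placeholder pose
        x, P = update(x, P, z_cam, H_cam, R_cam)
```

The same structure carries over to a UKF by replacing the Jacobian-based prediction and update with sigma-point propagation; the covariance study mentioned in the abstract corresponds to varying Q, R_imu, and R_cam above.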
Many practical tasks in robotic systems, such as cleaning windows, writing, or grasping, are inherently constrained, and learning policies subject to constraints is a challenging problem. In this paper, we propose a constraint-aware learning method that solves the policy learning problem for redundant robots executing a policy acting in the null space of a constraint. In particular, we are interested in generalizing learned null-space policies across constraints that were not known during training. We split the combined problem of learning constraints and policies into two: first estimating the constraint, and then estimating a null-space policy using the remaining degrees of freedom. For a linear parametrization, we provide a closed-form solution to the problem. We also define a metric for comparing the similarity of estimated constraints, which is useful for pre-processing the trajectories recorded in the demonstrations. We have validated our method by learning a wiping task from human demonstration on flat surfaces and reproducing it on an unknown curved surface using a force- or torque-based controller to achieve tool alignment. We show that, despite the differences between the training and validation scenarios, we learn a policy that still provides the desired wiping motion.
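A minimal sketch of the null-space learning and reproduction idea, under the linear parametrization u = W phi(x) mentioned in the abstract and assuming the executed commands satisfy u_i = N_i W phi(x_i) with N_i = I - A_i^+ A_i the null-space projector of a (known or previously estimated) constraint A_i. The function names, features, and the specific least-squares formulation are illustrative assumptions; the paper's exact estimator and constraint-similarity metric may differ.

```python
import numpy as np

def null_space_projector(A):
    """N = I - A^+ A maps any action into the null space of constraint A."""
    return np.eye(A.shape[1]) - np.linalg.pinv(A) @ A

def fit_null_space_policy(states, actions, constraints, phi):
    """Closed-form least-squares fit of W in u ~ N(x) W phi(x).

    Uses vec(N W f) = (f^T kron N) vec(W) to stack one linear system over
    all demonstration samples and solve it in a single step."""
    rows, rhs = [], []
    for x, u, A in zip(states, actions, constraints):
        N = null_space_projector(A)
        feat = phi(x)                           # feature vector, shape (m,)
        rows.append(np.kron(feat[None, :], N))  # shape (d, d*m)
        rhs.append(u)
    G, b = np.vstack(rows), np.concatenate(rhs)
    w_vec, *_ = np.linalg.lstsq(G, b, rcond=None)
    d, m = actions[0].shape[0], phi(states[0]).shape[0]
    return w_vec.reshape(m, d).T                # W with shape (d, m)

def reproduce(x, A_new, W, phi):
    """Generalization: run the learned policy in the null space of a
    constraint that was not seen during training."""
    return null_space_projector(A_new) @ (W @ phi(x))

# Toy example: demonstrations on a flat surface (constraint = surface normal),
# reproduction under a new, tilted constraint.
rng = np.random.default_rng(0)
phi = lambda x: np.concatenate([x, [1.0]])      # simple affine features
A_flat = np.array([[0.0, 0.0, 1.0]])            # forbid motion along z
states = [rng.standard_normal(3) for _ in range(50)]
actions = [null_space_projector(A_flat) @ (-0.5*x + np.array([0.1, 0.0, 0.0]))
           for x in states]
W = fit_null_space_policy(states, actions, [A_flat]*len(states), phi)
A_new = np.array([[0.0, 0.7071, 0.7071]])       # previously unseen constraint
u = reproduce(states[0], A_new, W, phi)
```

Only the null-space component of the policy is observable from constrained demonstrations, which is why reproduction projects the learned policy again through the projector of the new constraint rather than executing W phi(x) directly.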
This article describes the design and implementation of a novel inspection system based on computer vision for detecting defects in automobile vehicle bodies. The system has been deployed in the Ford factory at Almussafes (Valencia) as a result of several R&D projects between Ford España, S.A. and the