This paper presents the application of a Bayesian nonparametric time-series model to process monitoring and fault classification for industrial robotic tasks. By means of an alignment task performed with a real robot, we show how the proposed approach allows learning a set of sensor signature models that encode the spatial and temporal correlations among wrench measurements recorded during a number of successful task executions. Using these models, deviations from the expected sensor readings can be detected continuously and on-line. Separate models are learned for a set of possible error scenarios in which a human modifies the workspace configuration. These non-nominal task executions are correctly detected and classified by an on-line algorithm, which opens the possibility of developing error-specific recovery strategies. Our work is complementary to previous approaches in robotics, in which process monitors based on probabilistic models, but limited to contact events, were developed for control purposes. In this paper we instead focus on capturing dynamic models of sensor signatures throughout the whole task, thereby allowing continuous monitoring and extending the system's ability to interpret and react to errors.
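The continuous on-line monitoring idea can be illustrated with a minimal sketch. This is not the paper's Bayesian nonparametric model: as a simplifying assumption, each time step of the nominal wrench signal is summarized by a Gaussian estimated from several successful executions, and a new execution is flagged wherever its reading deviates beyond a z-score threshold. All names and the 3-sigma threshold are illustrative.

```python
import numpy as np

def learn_signature(nominal_runs):
    """Learn a per-time-step mean/std signature from an array
    (n_runs, n_steps) of a wrench component recorded over successful runs."""
    runs = np.asarray(nominal_runs, dtype=float)
    return runs.mean(axis=0), runs.std(axis=0) + 1e-6  # avoid zero std

def monitor(run, mean, std, z_thresh=3.0):
    """Return the time steps of `run` whose deviation from the learned
    signature exceeds the z-score threshold."""
    z = np.abs((np.asarray(run, dtype=float) - mean) / std)
    return np.flatnonzero(z > z_thresh)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 100)
    # Twenty noisy successful executions of the same task.
    nominal = np.sin(t) + 0.05 * rng.standard_normal((20, 100))
    mean, std = learn_signature(nominal)

    faulty = np.sin(t).copy()
    faulty[60:70] += 1.0          # injected fault in mid-task
    print(monitor(faulty, mean, std))  # time steps flagged as deviating
```

Because the signature covers the whole task rather than isolated contact events, deviations are detected at whatever step they occur, which is the property the abstract emphasizes.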
Removing the safety fences that separate humans and robots, so as to allow effective human-robot interaction, requires innovative safety control systems. An advanced functionality of a safety controller might be to detect humans entering the robotic cell and to estimate their intention, in order to enforce an effective safety reaction. This paper proposes advanced cognitive-vision algorithms, empowered by a dynamic model of human walking, for the detection and tracking of humans. Intention estimation is then addressed as the problem of predicting on-line the trajectory of the human, given a set of trajectories of walking people learnt off-line using an unsupervised classification algorithm. Results of applying the presented approach to a large number of experiments on volunteers are also reported.
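The intention-estimation step can be sketched as follows. As a simplifying assumption, the library of off-line trajectories is queried directly with a nearest-neighbour match on the observed prefix (a stand-in for the paper's unsupervised clustering), and the continuation of the closest match is returned as the prediction. Function and variable names are illustrative.

```python
import numpy as np

def predict_continuation(prefix, library):
    """Predict the future path of a partially observed walk.

    prefix:  (k, 2) array of observed 2-D positions.
    library: list of (n, 2) trajectories collected off-line.
    Returns the remainder of the library trajectory whose first k
    points are closest to the observed prefix.
    """
    prefix = np.asarray(prefix, dtype=float)
    k = len(prefix)
    best, best_cost = None, np.inf
    for traj in library:
        traj = np.asarray(traj, dtype=float)
        if len(traj) <= k:
            continue  # too short to provide a continuation
        cost = np.linalg.norm(traj[:k] - prefix)
        if cost < best_cost:
            best, best_cost = traj[k:], cost
    return best

if __name__ == "__main__":
    t = np.linspace(0, 1, 20)[:, None]
    straight = np.hstack([t, np.zeros_like(t)])   # walk along x
    turning = np.hstack([t, t ** 2])              # veer toward y
    observed = straight[:8] + 0.01                # noisy start of a straight walk
    future = predict_continuation(observed, [straight, turning])
    print(future[-1])  # predicted end point, near (1, 0)
```

A safety controller could then compare the predicted continuation against the robot workspace to decide whether a reaction is needed.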
This paper presents an approach for recognizing 6-DOF rigid-body motion trajectories (3D translation plus rotation), such as the trajectory of an object manipulated by a human. As a first step in the recognition process, measured 3D position trajectories of arbitrary, uncalibrated points attached to the rigid body are transformed into an invariant, coordinate-free representation of the rigid-body motion trajectory. This representation is independent of the reference frame in which the motion is observed, the chosen marker positions, the linear scale (magnitude) of the motion, the time scale, and the velocity profile along the trajectory. Two classification algorithms that use the invariant representation as input are developed and tested experimentally: one based on Dynamic Time Warping (DTW) and one based on Hidden Markov Models (HMMs). Both approaches yield high recognition rates (up to 95% and 91%, respectively). The advantage of the invariant approach is that motion trajectories observed in different contexts (different reference frames, marker positions, time scales, linear scales, and velocity profiles) can be compared and averaged, which allows models to be built from multiple demonstrations observed in different contexts and used to recognize similar motion trajectories in yet other contexts.
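The DTW-based classifier described above can be sketched in simplified form. As an assumption for brevity, trajectories are reduced to 1-D feature sequences (the paper uses invariant representations of full 6-DOF motions), and a query is assigned to the class of the template with the smallest DTW distance. Names are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences
    of possibly different lengths."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """templates: dict label -> template sequence.
    Returns the label of the nearest template under DTW."""
    return min(templates, key=lambda lbl: dtw_distance(query, templates[lbl]))

if __name__ == "__main__":
    t = np.linspace(0, 1, 50)
    templates = {"ramp": t, "bump": np.sin(np.pi * t)}
    # A resampled, time-warped ramp should still match the "ramp" template.
    query = np.linspace(0, 1, 80) ** 1.2
    print(classify(query, templates))
```

Because DTW aligns sequences of different lengths and velocity profiles, the query matches its template despite the warping, which mirrors the time-scale and velocity-profile invariance the abstract claims for the full representation.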