Thanks to the efforts of the robotics and autonomous systems community,
robots are becoming ever more capable. There is also an increasing demand from
end-users for autonomous service robots that can operate in real environments
for extended periods. In the STRANDS project we are tackling this demand
head-on by integrating state-of-the-art artificial intelligence and robotics
research into mobile service robots, and deploying these systems for long-term
installations in security and care environments. Over four deployments, our
robots have been operational for a combined duration of 104 days autonomously
performing end-user defined tasks, covering 116km in the process. In this
article we describe the approach we have used to enable long-term autonomous
operation in everyday environments, and how our robots are able to use their
long run times to improve their own performance.
A novel continuous genetic algorithm (CGA), combined with a distance algorithm, for solving the collision-free path planning problem for robot manipulators is presented in this paper. Given the desired Cartesian path to be followed by the manipulator, the robot configuration as described by its D-H parameters, and the stationary obstacles present in the manipulator's workspace, the proposed approach autonomously selects a collision-free path that minimizes the deviation between the generated and the desired Cartesian path, satisfies the joint limits of the manipulator, and maximizes the minimum distance between the manipulator links and the obstacles. One of the main features of the algorithm is that it avoids the manipulator's kinematic singularities because the calculations use the forward kinematics model instead of the inverse kinematics. The new path planning approach has been applied to two different robot configurations, the 2R and the PUMA 560, both non-redundant manipulators. Simulation results show that the proposed CGA always selects the safest path, avoiding obstacles within the manipulator workspace, regardless of whether there is a unique feasible solution in terms of joint limits or multiple feasible solutions. In addition, the generated path in Cartesian space deviates only minimally from the desired one.
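To make the optimization objective concrete, here is a minimal, hypothetical sketch of the kind of fitness evaluation the abstract describes, assuming a planar 2R arm with circular obstacles and candidates expressed as joint-space trajectories scored through forward kinematics only. The link lengths, joint limits, and weights are illustrative placeholders, not values from the paper.

```python
import numpy as np

L1, L2 = 1.0, 0.8                                        # assumed link lengths of the 2R arm
JOINT_LIMITS = np.radians([[-170, 170], [-170, 170]])    # assumed joint limits (low, high) per joint

def forward_kinematics(q):
    """End-effector position of the planar 2R arm for joint vector q."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def link_points(q, samples=10):
    """Points sampled along both links, used for clearance checks against obstacles."""
    elbow = np.array([L1 * np.cos(q[0]), L1 * np.sin(q[0])])
    tip = forward_kinematics(q)
    t = np.linspace(0.0, 1.0, samples)[:, None]
    return np.vstack([t * elbow, elbow + t * (tip - elbow)])

def fitness(joint_path, cartesian_path, obstacles, w_dev=1.0, w_clear=0.5):
    """Lower is better: Cartesian deviation minus weighted minimum clearance.

    joint_path:     (N, 2) joint trajectory (one decoded GA individual)
    cartesian_path: (N, 2) desired end-effector path
    obstacles:      list of (centre, radius) circles
    """
    # Hard penalty for violating the joint limits.
    if np.any(joint_path < JOINT_LIMITS[:, 0]) or np.any(joint_path > JOINT_LIMITS[:, 1]):
        return np.inf

    # Deviation between the generated and the desired Cartesian path,
    # computed via forward kinematics (the inverse is never needed).
    deviation = np.mean([np.linalg.norm(forward_kinematics(q) - p)
                         for q, p in zip(joint_path, cartesian_path)])

    # Minimum distance between any link point and any obstacle surface.
    clearance = min(np.linalg.norm(pt - c) - r
                    for q in joint_path
                    for pt in link_points(q)
                    for c, r in obstacles)
    if clearance <= 0.0:               # a link touches or penetrates an obstacle
        return np.inf

    return w_dev * deviation - w_clear * clearance
```

A GA wrapper would then evolve real-valued joint trajectories and rank them with this function; the hard penalties keep infeasible individuals out of the next generation.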
We present a cognitively plausible system capable of acquiring knowledge in language and vision from pairs of short video clips and linguistic descriptions. The aim of this work is to teach a robot manipulator how to execute natural language commands by demonstration. This is achieved by, first, learning a set of visual 'concepts' that abstract the visual feature spaces into representations with human-level meaning; second, learning the mapping (grounding) between words and the extracted visual concepts; and third, inducing grammar rules via a semantic representation known as Robot Control Language (RCL). We evaluate our approach against state-of-the-art supervised and unsupervised grounding and grammar induction systems, and show that a robot can learn to execute never-seen-before commands from pairs of unlabelled linguistic and visual inputs.
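As a rough illustration of the grounding step described above, the following sketch learns a word-to-concept mapping from co-occurrence counts over (command words, detected concepts) pairs. The toy data and the normalised-count model are assumptions for illustration only, not the paper's actual learning procedure.

```python
from collections import defaultdict

def learn_grounding(pairs):
    """pairs: iterable of (words, concepts) taken from video/description pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, concepts in pairs:
        for w in words:
            for c in concepts:
                counts[w][c] += 1
    # Normalise each word's counts into a distribution over visual concepts.
    grounding = {}
    for w, ccounts in counts.items():
        total = sum(ccounts.values())
        grounding[w] = {c: n / total for c, n in ccounts.items()}
    return grounding

# Hypothetical training pairs: command words and the concepts seen in the clip.
pairs = [
    (["pick", "up", "the", "red", "block"], ["grasp", "colour:red", "shape:cube"]),
    (["move", "the", "red", "ball"], ["translate", "colour:red", "shape:sphere"]),
]
grounding = learn_grounding(pairs)
print(grounding["red"])   # with more pairs, "colour:red" comes to dominate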
For autonomous robots to collaborate on joint tasks with humans they require a shared understanding of an observed scene. We present a method for unsupervised learning of common human movements and activities on an autonomous mobile robot, which generalises and improves on recent results. Our framework encodes multiple qualitative abstractions of RGBD video from human observations and does not require external temporal segmentation. Analogously to information retrieval in text corpora, each human detection is modelled as a random mixture of latent topics. A generative probabilistic technique is used to recover topic distributions over an auto-generated vocabulary of discrete, qualitative spatio-temporal code words. We show that the emergent categories align well with human activities as interpreted by a human. This is a particularly challenging task on a mobile robot due to the varying camera viewpoints, which lead to incomplete, partial and occluded human detections.
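The topic-modelling step can be pictured with the following sketch, which applies an off-the-shelf LDA implementation to a bag-of-code-words matrix, one row per human detection. The synthetic counts, vocabulary size, and topic count below are placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_detections, vocab_size, n_topics = 200, 50, 5

# Stand-in for the auto-generated code-word counts (detections x vocabulary);
# in the real system each column is a discrete qualitative spatio-temporal code word.
codeword_counts = rng.poisson(lam=1.0, size=(n_detections, vocab_size))

lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
topic_mix = lda.fit_transform(codeword_counts)   # per-detection mixture of latent "activities"

# Each row of topic_mix is a detection's distribution over latent topics;
# each row of lda.components_ weights the code words that define one topic.
print(topic_mix[0])
print(lda.components_.shape)   # (n_topics, vocab_size)
```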