This work describes a virtual reality (VR)-based robot teleoperation framework which relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on a slave robot's end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improve task performance. We experimentally compared the operator's ability to understand the remote environment in different visualization modes: a single external static camera, an in-hand camera, an in-hand plus external static camera, and an in-hand camera with OctoMap occupancy mapping. The last option provided the operator with the best understanding of the remote environment whilst requiring relatively little communication bandwidth. Finally, we propose grasping methods compatible with VR-based teleoperation using the in-hand camera. Video demonstration: https://youtu.be/3vZaEykMS_E.
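The bandwidth claim follows from how occupancy mapping compresses the depth stream: many raw points collapse into a much smaller set of occupied voxels. The following is a minimal sketch of that idea (not the paper's implementation, which uses the actual OctoMap library); the voxel size and synthetic point cloud are illustrative assumptions.

```python
# Minimal sketch of occupancy-grid compression (assumption: this stands in
# for OctoMap-style mapping; the paper uses the real OctoMap library).
import numpy as np

def occupancy_voxels(points, voxel_size=0.05):
    """Map 3-D points (N x 3 array) to the set of occupied voxel indices."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    return {tuple(v) for v in idx}

rng = np.random.default_rng(0)
# Synthetic depth-camera cloud: 100k points inside a 1 m cube.
cloud = rng.random((100_000, 3))
voxels = occupancy_voxels(cloud)
# With 5 cm voxels a 1 m cube has at most 20^3 = 8000 occupied cells,
# so far fewer voxels than raw points need to be transmitted.
print(len(cloud), len(voxels))
```

Transmitting only the occupied-voxel set (or octree updates, as OctoMap does) is what keeps the link usage small compared with streaming full depth images.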
Dealing safely with nuclear waste is an imperative for the nuclear industry. Increasingly, robots are being developed to carry out complex tasks such as perceiving, grasping, cutting, and manipulating waste. Radioactive material can be sorted, and either stored safely or disposed of appropriately, entirely through the actions of remotely controlled robots. Radiological characterisation is also critical during the decommissioning of nuclear facilities. It involves the detection and labelling of radiation levels, waste materials, and contaminants, as well as determining other related parameters (e.g., thermal and chemical), with the data visualised as 3D scene models. This paper overviews work by researchers at the QMUL Centre for Advanced Robotics (ARQ), a partner in the UK EPSRC National Centre for Nuclear Robotics (NCNR), a consortium working on the development of radiation-hardened robots fit to handle nuclear waste. Three areas of nuclear-related research are covered here: human–robot interfaces for remote operations, sensor delivery, and intelligent robotic manipulation.
Robotic manipulation is fundamental to many real-world applications; however, it remains an unsolved problem and a very active research area. New algorithms for robot perception and control are frequently proposed by the research community. These methods must be thoroughly evaluated in realistic conditions before they can be adopted by industry. This process can be extremely time-consuming, mainly due to the complexity of integrating different hardware and software components. Hence, we propose the Grasping Robot Integration and Prototyping (GRIP) system, a robot-agnostic software framework that enables visual programming and fast prototyping of robotic grasping and manipulation tasks. We present several applications that have been programmed with GRIP, and report a user study which indicates that the framework enables naive users to implement robotic tasks correctly and efficiently.