Falls are among the major injury risks for elderly people living alone at home. Computer vision-based systems offer a new, low-cost, and promising solution for fall detection. This paper presents a new fall-detection tool based on a commercial RGB-D camera. The proposed system accurately detects several types of falls, running a real-time algorithm to determine whether a fall has occurred. The approach evaluates the contraction and expansion speed of the width, height, and depth of the 3D human bounding box, as well as its position in space. Our solution requires no prior knowledge of the scene (e.g., recognition of the floor in the virtual environment); the only constraint is knowledge of the RGB-D camera's position in the room. Moreover, the proposed approach avoids false positives such as sitting down, lying down, or retrieving an object from the floor. Experimental results qualitatively and quantitatively show the quality of the proposed approach in terms of robustness, background independence, and speed independence.
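The core idea of thresholding the bounding box's contraction speed can be illustrated with a minimal sketch. This is a simplified, height-only illustration under assumed thresholds; the function name, box representation, and threshold values are hypothetical and are not taken from the paper, which also considers width, depth, and box position.

```python
def detect_fall(prev_box, curr_box, dt, speed_threshold=1.0, height_threshold=0.4):
    """Flag a fall when the 3D bounding box's height contracts faster than
    speed_threshold (m/s) and ends up below height_threshold (m).

    All names and threshold values are illustrative assumptions.
    prev_box / curr_box: dicts with a "height" key in meters.
    dt: time elapsed between the two frames, in seconds.
    """
    # Rate of change of the box height (negative = contraction), in m/s.
    dh = (curr_box["height"] - prev_box["height"]) / dt
    # A fast vertical contraction combined with a low final box height
    # suggests a fall rather than slowly sitting or bending down.
    return dh < -speed_threshold and curr_box["height"] < height_threshold
```

In this toy version, slow movements such as sitting down fail the speed test, while bending to pick something up fails the final-height test, which mirrors how the paper's approach rejects those false positives.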
Robotic exoskeletons are increasingly and successfully used in neuro-rehabilitation therapy. Indeed, they allow patients to perform movements requiring more complex inter-joint coordination and gravity counterbalancing, including assisted object grasping. We propose a robust RGB-D camera-based approach for automated tracking of both still and moving objects that can assist the reaching/grasping tasks in the aforementioned scenarios. The proposed approach works with objects that have not been pre-processed, making it possible to offer a flexible therapy. Moreover, our system is specialized to estimate the pose of cylinder-shaped objects, enabling cylinder grasps with the help of a robotic hand orthosis. To validate our method in terms of both tracking and reaching/grasping performance, we present results from tests conducted both in simulation and on real robotic-assisted tasks performed by a patient.
Soft growing robots have been proposed for applications such as complex manipulation tasks and navigation in disaster scenarios. Safe interaction and ease of production favor the adoption of this technology, but soft robots can be challenging to teleoperate due to their unique degrees of freedom. In this paper, we propose a human-centered interface that allows users to teleoperate a soft growing robot for manipulation tasks using arm movements. A study involving a pick-and-place manipulation task was conducted to assess the intuitiveness of the interface and the performance of our soft robot. The results show that users completed the task with a success rate of 97%, achieving placement errors below 2 cm on average. These results demonstrate that our body-movement-based interface is an effective method for controlling a soft growing robot manipulator.