Deaf-blindness forces people to live in isolation. At present, no technological solution enables two or more deaf-blind people to communicate remotely in tactile Sign Language (t-SL). When resorting to t-SL, deaf-blind people can communicate only with people physically present in the same place, because they must reciprocally explore each other's hands to exchange messages. We present a preliminary version of PARLOMA, a novel system enabling remote communication between deaf-blind persons. It is composed of a low-cost depth sensor as the only input device, paired with a robotic hand as the output device. Any user can perform hand-shapes in front of the depth sensor; the system recognizes a set of hand-shapes, which are sent over the web and reproduced by an anthropomorphic robotic hand. PARLOMA can thus work as a "telephone" for deaf-blind people, with the potential to dramatically improve their quality of life. PARLOMA has been presented to and is supported by the main Italian deaf-blind association, Lega del Filo d'Oro, and end users have been involved in the design phase.
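The pipeline described above (recognize a hand-shape, send it over the web, reproduce it on a robotic hand) implies that only a symbolic label needs to cross the network, not raw depth frames. A minimal sketch of such a message layer is shown below; the label set and function names are hypothetical and do not come from the paper:

```python
import json

# Hypothetical label set: recognized hand-shapes travel as symbolic IDs,
# which keeps the network payload tiny compared to raw depth frames.
HANDSHAPE_LABELS = {"flat_hand", "fist", "index_point", "v_shape"}

def encode_handshape(label: str, timestamp: float) -> bytes:
    """Serialize one recognized hand-shape as a newline-delimited JSON message."""
    if label not in HANDSHAPE_LABELS:
        raise ValueError(f"unknown hand-shape: {label}")
    msg = {"type": "handshape", "label": label, "t": timestamp}
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode_handshape(raw: bytes) -> str:
    """On the receiving side, extract the label used to drive the robotic hand."""
    msg = json.loads(raw.decode("utf-8"))
    assert msg["type"] == "handshape"
    return msg["label"]
```

Sending labels rather than video also makes the system tolerant of low-bandwidth links, which matters for a "telephone"-like service.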
We present a novel robotic telepresence platform: a semi-autonomous mobile robot built on a cloud robotics framework, developed to enable mobility-impaired people to enjoy museums and archaeological sites that would otherwise be inaccessible. Such places are very often not equipped to provide access for mobility-impaired visitors, in particular because accessibility aids require dedicated infrastructure that may not fit within the environment and demands large investments. For this reason, people with mobility impairments are often unable to enjoy part, or even all, of the museum experience. Existing solutions are frequently based on recorded tours and therefore do not allow active participation by the user. In contrast, the presented platform is intended to let users fully enjoy a museum visit: a robot equipped with a camera is placed within the museum, and users can control it to follow predefined tours or to explore the museum freely. Our solution ensures that users see exactly what the robot is seeing, in real time. The cloud robotics platform handles both navigation and teleoperation: navigation tasks let the robot reliably follow predefined tours, while teleoperation tasks are mainly concerned with robot safety (e.g., through dynamic obstacle detection and avoidance software). The proposed platform has been optimized to maximize user experience.
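The safety role of the teleoperation layer described above (dynamic obstacle detection and avoidance) can be illustrated with a minimal velocity-scaling policy: the user's forward command is attenuated as the nearest obstacle gets closer. This is a sketch under assumed distances and does not reflect the platform's actual controller:

```python
def safe_velocity(user_cmd: float, scan_ranges: list[float],
                  stop_dist: float = 0.4, slow_dist: float = 1.0) -> float:
    """Scale a user's forward velocity command by obstacle proximity.

    Hypothetical policy: full stop within stop_dist metres, linear
    slow-down between stop_dist and slow_dist, unmodified command beyond.
    scan_ranges holds range readings (metres) from a front-facing scanner.
    """
    nearest = min(scan_ranges)
    if nearest <= stop_dist:
        return 0.0
    if nearest < slow_dist:
        scale = (nearest - stop_dist) / (slow_dist - stop_dist)
        return user_cmd * scale
    return user_cmd
```

A policy of this shape keeps the user in control in open space while guaranteeing that the robot never closes on an obstacle at full speed, which is the division of labour between user commands and safety software that the abstract describes.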
We present a novel open-source, 3D-printable, dexterous anthropomorphic robotic hand specifically designed to reproduce Sign Language hand poses for deaf and deaf-blind users. We improved the InMoov hand, enhancing its dexterity by adding abduction/adduction degrees of freedom to three fingers (thumb, index, and middle) and a three-degree-of-freedom parallel spherical wrist joint. A systematic kinematic analysis is provided. The proposed robotic hand is validated within the framework of the PARLOMA project, which aims at developing a telecommunication system for deaf-blind people enabling remote transmission of signs from tactile Sign Languages. Both hardware and software are available online to encourage further improvements from the community.
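As a toy illustration of the kind of kinematic analysis mentioned above, the forward kinematics of a single finger modelled as a two-link planar chain can be written as follows. Link lengths and joint conventions are hypothetical; the real hand additionally has abduction/adduction joints and a spherical wrist:

```python
import math

def finger_fk(l1: float, l2: float, q1: float, q2: float) -> tuple[float, float]:
    """Planar forward kinematics of a two-link finger.

    l1, l2: proximal and distal link lengths; q1, q2: flexion angles
    (radians), each measured relative to the previous link. Returns the
    fingertip position (x, y) in the plane of flexion.
    """
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y
```

Composing such per-finger chains with the wrist transform gives the full pose of the hand, which is what a sign-reproduction controller must solve for.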
Vision-based Pose Estimation (VPE) represents a non-invasive solution for smooth, natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum because they are highly intuitive, so that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit estimates the 3D positions of a human operator's hand joints in real time using only an RGB-D camera and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates the hand movements while an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility and performance of the proposed system. The results demonstrate that, using our system, the operator was able to directly control the volunteers' hand movements.
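One practical detail implied by such a master–slave setup is that vision-based joint estimates must be kept within the exoskeleton's mechanical range before being sent to the slave, since a camera-derived pose can exceed what the hardware can safely track. A minimal sketch with hypothetical limits follows (the abstract does not specify how HX performs this step):

```python
def clamp_joint_angles(estimated: list[float],
                       limits: list[tuple[float, float]]) -> list[float]:
    """Clamp vision-estimated joint angles (radians) to the exoskeleton's
    mechanical range before commanding the slave device.

    `limits` pairs each joint with its (min, max) travel; the values used
    in practice would come from the exoskeleton's specification.
    """
    return [min(max(q, lo), hi) for q, (lo, hi) in zip(estimated, limits)]
```

In a real control loop this clamping would run every frame, between the RGB-D pose estimator on the master side and the command sent over the network to the slave.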
Advancements in the study of the human sense of touch are fueling the field of haptics. This is paving the way for augmenting sensory perception during object palpation in tele-surgery and for reproducing the sensed information through tactile feedback. Here, we present a novel tele-palpation apparatus that enables the user to detect nodules of various distinct stiffnesses buried in an ad-hoc polymeric phantom. The contact force measured by the platform was encoded using a neuromorphic model and reproduced on the index fingertip of a remote user through a haptic glove embedding a piezoelectric disk. We assessed the effectiveness of this feedback in allowing nodule identification under two experimental conditions of real-time telepresence: In Line of Sight (ILS), with the platform placed in the user's visible range; and the more demanding Not In Line of Sight (NILS), with the platform and the user 50 km apart. We found that the identification rate was higher for stiffer inclusions than for softer ones (averaging 74% within the duration of the task), in both telepresence conditions evaluated. These promising results call for further exploration of tactile augmentation technology for telepresence in medical interventions.
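The abstract states that contact force was encoded with a neuromorphic model before being reproduced on the fingertip. One common transduction scheme of this kind is a leaky integrate-and-fire (LIF) neuron that converts a continuous force trace into discrete spike events, with stronger forces producing higher spike rates. The sketch below uses illustrative parameters, not those of the apparatus:

```python
def lif_encode(force_samples: list[float], dt: float = 1e-3,
               tau: float = 0.02, gain: float = 50.0,
               threshold: float = 0.8) -> list[int]:
    """Encode a force trace into spikes with a leaky integrate-and-fire neuron.

    Each force sample drives a membrane potential that leaks with time
    constant tau; crossing the threshold emits a spike and resets the
    potential. Returns the sample indices at which spikes occur.
    All parameters are illustrative assumptions.
    """
    v = 0.0
    spikes = []
    for i, f in enumerate(force_samples):
        v += dt * (-v / tau + gain * f)  # leaky integration of input drive
        if v >= threshold:               # threshold crossing -> spike
            spikes.append(i)
            v = 0.0                      # reset membrane potential
    return spikes
```

Under this kind of encoding, a stiffer nodule (larger contact force at a given indentation) yields a denser spike train, which is consistent with the finding that stiffer inclusions were easier to identify.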