More than one billion people worldwide live with some form of disability. According to the World Health Organization, people with disabilities are particularly vulnerable to deficiencies in services such as health care, rehabilitation, support, and assistance. Recent technological developments can mitigate these deficiencies by offering less expensive assistive systems that meet users' needs. This paper reviews and summarizes research efforts toward the development of such systems, focusing on two social groups: older adults and children with autism.
Accurate hand pose estimation at the joint level has several applications in human-robot interaction, user interfaces, and virtual reality, but it remains an open problem. Recent deep learning techniques could bring substantial improvements, yet they require enormous amounts of annotated data. The hand pose datasets released so far are ill-suited to deep learning methods, suffering from issues such as a limited number of samples, high-level abstraction in the annotations, or samples consisting only of depth maps. In this work, we introduce a multiview hand pose dataset that provides color images of hands with several kinds of annotations for each image: the bounding box and the 2D and 3D locations of the hand joints. Furthermore, we introduce a simple yet accurate deep learning architecture for real-time, robust 2D hand pose estimation.
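The abstract does not specify the network, so the following is only a minimal sketch of one common approach to 2D hand pose estimation from RGB images: a small convolutional network that regresses one heatmap per joint, with the 2D joint location read off as the heatmap argmax. The input size (128x128) and the 21-joint hand skeleton are assumptions for illustration, not the paper's reported architecture.

```python
# Hedged sketch: per-joint 2D heatmap regression from an RGB hand crop.
import torch
import torch.nn as nn

NUM_JOINTS = 21  # assumed; a common hand-skeleton convention

class HandPoseNet(nn.Module):
    def __init__(self, num_joints: int = NUM_JOINTS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps the features to one heatmap per joint
        self.head = nn.Conv2d(128, num_joints, 1)

    def forward(self, x):
        return self.head(self.backbone(x))             # (B, J, 32, 32)

def heatmaps_to_keypoints(heatmaps):
    """Take the argmax of each heatmap as the predicted 2D joint location."""
    b, j, h, w = heatmaps.shape
    flat = heatmaps.view(b, j, -1).argmax(dim=-1)      # (B, J) flat indices
    return torch.stack((flat % w, flat // w), dim=-1)  # (B, J, 2) in heatmap coords

model = HandPoseNet()
img = torch.randn(1, 3, 128, 128)                      # dummy RGB crop
print(heatmaps_to_keypoints(model(img)).shape)         # torch.Size([1, 21, 2])
```

Heatmap regression is a natural fit here because it keeps the output spatial, which tends to be more robust than directly regressing joint coordinates.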
Every year, a significant number of people lose a body part in an accident, through illness, or in high-risk manual jobs. Several studies and research efforts have tried to reduce the constraints and risks in their lives through the use of technology. This work proposes a learning-based approach to gesture recognition using a surface electromyography (sEMG) device, the Myo Armband released by Thalmic Labs, a commercial device with eight non-intrusive, low-cost sensors. Using the Myo Armband, which records data at about 200 Hz, we collected a dataset of six distinct hand gestures from 35 able-bodied subjects. We trained a gated recurrent unit (GRU) network that takes as input the raw signals from the sEMG sensors. The proposed approach obtained 99.90% training accuracy and 99.75% validation accuracy. We also evaluated the system on a test set of new subjects, obtaining an accuracy of 77.85%. In addition, we report the test predictions for each gesture separately and analyze which gestures are difficult for the Myo Armband, combined with our proposed network, to distinguish accurately. Moreover, we study for the first time the capability of GRU networks in gesture recognition. Finally, we integrated our method into a system that classifies hand gestures live.
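To make the described pipeline concrete, here is a hedged sketch of the kind of GRU classifier the abstract outlines: raw 8-channel sEMG windows in, one of six gesture classes out. The window length (400 samples, i.e. 2 s at 200 Hz) and the hidden size are assumptions for illustration; the paper's exact hyperparameters are not given in the abstract.

```python
# Minimal GRU gesture classifier over raw sEMG windows (sketch, not the
# authors' exact model).
import torch
import torch.nn as nn

class GestureGRU(nn.Module):
    def __init__(self, channels=8, hidden=128, classes=6):
        super().__init__()
        self.gru = nn.GRU(input_size=channels, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, classes)

    def forward(self, x):               # x: (batch, time, channels)
        _, h = self.gru(x)              # h: (1, batch, hidden), last hidden state
        return self.fc(h.squeeze(0))    # (batch, classes) logits

model = GestureGRU()
window = torch.randn(4, 400, 8)         # 4 dummy 2-second windows of raw sEMG
logits = model(window)
print(logits.argmax(dim=-1))            # predicted gesture index per window
```

Using the final hidden state as a summary of the whole window is the simplest design; feeding raw signals rather than hand-crafted features matches the approach the abstract describes.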
In large retail stores, such as malls or sport and food stores, customers often feel lost because of the difficulty of finding a product. Although these stores usually have visual signs to guide customers toward specific products, the signs themselves can be hard to find and are not always up to date. In this paper, we propose a system that combines deep learning and augmented reality techniques to provide the customer with useful information. First, the system learns the visual appearance of the different areas of the store using a deep learning architecture. Customers can then use their mobile devices to take a picture of their surroundings; uploading this image to the trained image classifier identifies the area of the store where the customer is located. Using this location together with augmented reality techniques, the system then provides information relevant to that area: the route to another area where a product is available, 3D product visualization, user location, analytics, and so on. The developed system locates a user in an example store with 98% accuracy. The combination of deep learning and augmented reality shows promising results for improving the user experience in retail and commerce applications: branding, advanced visualization, personalization, enhanced customer experience, and more.
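The localization step the abstract describes is ordinary image classification, so a transfer-learning sketch can illustrate it. The model choice (ResNet-18), the area labels, and the locate() helper below are assumptions for illustration only; the AR overlay itself is outside the scope of this snippet.

```python
# Hedged sketch: classify which store area a customer's photo was taken in,
# via a pretrained CNN fine-tuned on photos of each area.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

AREAS = ["entrance", "footwear", "electronics", "food_court"]  # hypothetical labels

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(AREAS))  # replace the classifier head
# ... fine-tune on labeled photos of each store area, then:
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def locate(photo_path: str) -> str:
    """Return the predicted store area for the customer's uploaded photo."""
    x = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        area_idx = model(x).argmax(dim=1).item()
    return AREAS[area_idx]
```

Once the area label is known, it can key into whatever store map or product database drives the augmented reality overlay (routes, 3D visualizations, and so on).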