Hand gestures are a form of nonverbal communication used in several fields such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation and medical applications. Research on hand gestures has adopted many different techniques, including those based on instrumented sensor technology and computer vision. Hand gestures can be classified under many headings, such as posture versus gesture, dynamic versus static, or a hybrid of the two. This paper reviews the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques and covering their similarities and differences, the hand segmentation technique used, the classification algorithms and their drawbacks, the number and types of gestures, the dataset used, the detection range (distance) and the type of camera used. The paper offers a thorough general overview of hand gesture methods with a brief discussion of some possible applications.
Physiological jaundice occurs in the first week of life in newborns due to an increase in bilirubin level, which in turn leads to yellowish discolouration of the skin and sclera. Severe jaundice and toxic levels of bilirubin can cause brain damage when bilirubin crosses into the central nervous system. Invasive blood sampling is the optimum method for measuring bilirubin level; however, it is painful and stressful for the neonate, and it may cause blood loss and lead to anaemia, especially when repeated blood tests are required. In addition, blood tests expose the infant to the risk of infection. Moreover, invasive tests are time-consuming as their results are not immediate. Due to all the problems mentioned earlier, this paper proposes a new system for jaundice detection based on skin colour analysis. The proposed system uses a digital camera as a colour-based screening tool as it is affordable, objective, ubiquitous, and less painful to infants. Based on the analysis of the captured images, jaundice was detected and estimated, opening the door for further case studies in medical applications, especially in diagnosis, monitoring patients' health, and supplying active treatment.
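The colour-based screening idea described above can be sketched as follows. This is a minimal illustration only: the yellowness index and the threshold are hypothetical placeholders, not the paper's calibrated model, and a real system would first segment the skin region and correct for lighting.

```python
# Hypothetical sketch of camera-based jaundice screening by skin-colour
# analysis. Jaundiced skin tends toward yellow, i.e. relatively high
# red/green values and a low blue value, so one crude index is the mean
# gap between the red-green average and the blue channel over a skin patch.

def yellowness_index(pixels):
    """Mean of (R+G)/2 - B over a list of (R, G, B) skin-patch pixels."""
    total = 0.0
    for r, g, b in pixels:
        total += (r + g) / 2.0 - b
    return total / len(pixels)

def screen_for_jaundice(pixels, threshold=60.0):
    """Flag a patch as possibly jaundiced when the yellowness index
    exceeds a (hypothetical, uncalibrated) threshold."""
    return yellowness_index(pixels) > threshold
```

In practice the threshold would be calibrated against invasive bilirubin measurements on a reference cohort rather than chosen by hand.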
Technological advances have allowed hand gestures to become an important research field, especially in applications such as health care and assistive applications for elderly people, providing natural interaction with the assisting system through a camera by making specific gestures. In this study, we proposed three different scenarios using a Microsoft Kinect V2 depth sensor and evaluated the effectiveness of the outcomes. The first scenario used joint tracking combined with a depth threshold to enhance hand segmentation and efficiently recognise the number of fingers extended. The second scenario utilised the metadata parameters provided by the Kinect V2 depth sensor, which supplied 11 parameters related to the tracked body and gave information about three gestures for each hand. The third scenario used a simple convolutional neural network with joint tracking from the depth metadata to recognise and classify five hand gesture categories. In this study, deaf-mute elderly people performed five different hand gestures, each related to a specific request, such as needing water, a meal, the toilet, help or medicine. The request was then sent via the global system for mobile communication (GSM) as a text message to the care provider's smartphone because the elderly subjects could not execute any activity independently.
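The first scenario's depth-threshold segmentation can be sketched as follows. This is an illustrative assumption, not the authors' code: given the depth of the tracked hand joint (in millimetres, as Kinect reports it), pixels are kept only if they lie within a narrow depth band around that joint, so the arm, body and background fall away.

```python
# Minimal sketch of joint-guided depth-threshold hand segmentation.
# depth_map is a 2-D list of per-pixel depths in mm (0 = no reading),
# hand_depth is the tracked hand joint's depth, and band is a
# hypothetical tolerance around it.

def segment_hand(depth_map, hand_depth, band=80):
    """Return a binary mask keeping only valid pixels within
    +/- band mm of the tracked hand joint's depth."""
    mask = []
    for row in depth_map:
        mask.append([1 if d > 0 and abs(d - hand_depth) <= band else 0
                     for d in row])
    return mask
```

The resulting mask isolates the hand blob, after which finger counting can proceed on its contour.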
Computer vision has wide application in medical sciences such as health care and home automation. This study on computer vision for elderly care is based on a Microsoft Kinect sensor, an inexpensive, three-dimensional, non-contact device that is comfortable for patients while being highly reliable and suitable for long-term monitoring. The paper proposes a hand gesture system for elderly health care based on a deep convolutional neural network (CNN) that extracts features, which are then classified into five gesture categories using a support vector machine (SVM). The proposed system is beneficial for elderly patients who are voiceless or deaf-mute and unable to communicate with others. Each gesture indicates a specific request, such as “Water”, “Meal”, “Toilet”, “Help” or “Medicine”, and is translated into a command sent to a microcontroller circuit that forwards the request to the caregiver's mobile phone via the global system for mobile communication (GSM). The system was tested in an indoor environment and provided reliable outcomes and a useful interface for older people with limb disabilities to communicate with their families and caregivers.
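The CNN-plus-SVM pipeline can be sketched at the decision stage as follows. All names here are illustrative assumptions: the CNN feature vector is taken as given, and a one-vs-rest linear SVM (toy weights, not trained parameters) picks the gesture whose decision score w·x + b is highest, which is then mapped to the corresponding request string.

```python
# Hypothetical sketch of the gesture-to-request decision stage.
# `features` stands in for the CNN feature vector; `weights`/`biases`
# are per-class linear-SVM parameters (toy values in the test below).

GESTURE_LABELS = ["Water", "Meal", "Toilet", "Help", "Medicine"]

def classify_gesture(features, weights, biases):
    """One-vs-rest linear SVM over CNN features: return the request
    label of the class with the highest decision score w.x + b."""
    scores = [sum(w * x for w, x in zip(ws, features)) + b
              for ws, b in zip(weights, biases)]
    return GESTURE_LABELS[scores.index(max(scores))]
```

In the described system, the chosen label would then be passed to the microcontroller, which transmits it to the caregiver's phone over GSM.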