Falls are one of the major injury risks for elderly people living alone at home. Computer vision-based systems offer a new, low-cost, and promising solution for fall detection. This paper presents a new fall-detection tool based on a commercial RGB-D camera. The proposed system accurately detects several types of falls, running a real-time algorithm to determine whether a fall has occurred. The approach evaluates the contraction and expansion speed of the width, height, and depth of the 3D human bounding box, as well as its position in space. Our solution requires no prior knowledge of the scene (e.g., recognition of the floor in the virtual environment); the only constraint is knowledge of the RGB-D camera's position in the room. Moreover, the proposed approach avoids false positives such as sitting, lying down, and retrieving something from the floor. Experimental results qualitatively and quantitatively show the quality of the proposed approach in terms of robustness and independence from background and speed.
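As a rough illustration of the bounding-box criterion described in this abstract, the sketch below flags a fall when the box height collapses quickly while the box settles near the floor. It is a minimal sketch, not the authors' implementation: the function and threshold names (detect_fall, HEIGHT_SPEED_THRESH, MAX_FALLEN_HEIGHT) and their values are hypothetical, and it assumes per-frame 3D box dimensions and timestamps are already available from the RGB-D tracker.

```python
import numpy as np

# Hypothetical thresholds (not from the paper): a fast downward collapse
# of the box height (m/s) combined with the box ending up near the floor (m).
HEIGHT_SPEED_THRESH = -1.0
MAX_FALLEN_HEIGHT = 0.5

def detect_fall(boxes, timestamps):
    """boxes: sequence of (width, height, depth) of the person's 3D
    bounding box per frame; timestamps: frame times in seconds.
    Returns the index of the frame where a fall is detected, else None."""
    boxes = np.asarray(boxes, dtype=float)
    dt = np.diff(timestamps)
    # Per-frame contraction/expansion speed of each box dimension.
    speeds = np.diff(boxes, axis=0) / dt[:, None]
    for i, (dw, dh, dd) in enumerate(speeds):
        height_now = boxes[i + 1, 1]
        # A fall: height collapses quickly while width or depth expands,
        # and the box settles close to the floor.
        if dh < HEIGHT_SPEED_THRESH and (dw > 0 or dd > 0) \
                and height_now < MAX_FALLEN_HEIGHT:
            return i + 1
    return None
```

In practice such a rule would be combined with the camera-position constraint mentioned in the abstract to express the height threshold in floor coordinates.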
Recognizing facial emotions is an important aspect of interpersonal communication that may be impaired in various neurological disorders: Asperger's syndrome, autism, schizoid personality disorder, Parkinsonism, Urbach-Wiethe disease, amyotrophic lateral sclerosis, bipolar disorder, depression, and Alzheimer's disease. Although it is not possible to define emotions uniquely, we can say that they are mental states, physiological and psychophysiological changes associated with internal or external stimuli, both natural and learned. This paper highlights certain requirements that the specification approach would need to meet for the production of such tools to be achievable. In particular, we present an innovative and still experimental tool to support the diagnosis of neurological disorders by means of facial-expression monitoring. At the same time, we propose a new study to measure several impairments of patients' emotion-recognition ability and to improve the reliability of using such measures in computer-aided diagnosis strategies.
Introduction and objective: the purpose of this work is to design and implement an innovative tool to recognize 16 different human gestural actions and use them to predict 7 different emotional states. The solution proposed in this paper is based on RGB and depth information from 2D/3D images acquired with a commercial RGB-D sensor called Kinect. Materials: the dataset is a collection of several human actions performed by different actors. Each action is performed three times by each actor in each video. 20 actors perform 16 different actions, both seated and upright, totalling 40 videos per actor. Methods: human gestural actions are recognized by extracting features such as angles and distances between joints of the human skeleton from RGB and depth images. Emotions are selected according to the state of the art. Experimental results: despite truly similar actions, the overall accuracy reached is approximately 80%. Conclusions and future works: the proposed work appears to be background- and speed-independent, and it will be used in the future as part of a multimodal emotion-recognition software based on facial expressions and speech analysis as well.
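As a minimal sketch of the kind of angle-and-distance skeleton features this abstract describes: the joint selection and names below are illustrative assumptions in a Kinect-style naming scheme, not the paper's exact feature set, and skeleton joints are assumed to arrive as 3D points.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by the 3D joints a-b-c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def gesture_features(skel):
    """skel: dict mapping joint names to 3D coordinates (e.g. from a
    Kinect skeleton). Returns a small feature vector of elbow angles
    and hand-to-head distances; the choice of joints is hypothetical."""
    return np.array([
        joint_angle(skel["shoulder_right"], skel["elbow_right"], skel["wrist_right"]),
        joint_angle(skel["shoulder_left"], skel["elbow_left"], skel["wrist_left"]),
        np.linalg.norm(np.asarray(skel["hand_right"]) - np.asarray(skel["head"])),
        np.linalg.norm(np.asarray(skel["hand_left"]) - np.asarray(skel["head"])),
    ])
```

A per-frame vector of this kind could then feed any sequence classifier over the 16 action classes; the abstract does not specify the classifier used.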