The use of smartphones for human activity recognition has become popular due to the wide adoption of smartphones and their rich sensing capabilities. This article introduces a benchmark dataset, the MobiAct dataset, for smartphone-based human activity recognition. It comprises data recorded from the accelerometer, gyroscope and orientation sensors of a smartphone for fifty subjects performing nine different types of Activities of Daily Living (ADLs) and fifty-four subjects simulating four different types of falls. This dataset is used to develop an optimized feature selection and classification scheme for the recognition of ADLs from the accelerometer recordings. Special emphasis was placed on selecting the most effective features from feature sets already validated in previously published studies. An important qualitative part of this investigation is a comparative study evaluating the proposed optimal feature set on both the MobiAct dataset and another popular dataset in the domain. The results obtained show a higher classification accuracy than previously reported studies, exceeding 99% for the involved ADLs.
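As an illustration of the kind of accelerometer-based feature extraction such schemes rely on, the following is a minimal sketch: it computes per-axis mean and standard deviation plus the mean signal magnitude over a window of tri-axial samples. The specific features, window handling, and function names here are illustrative assumptions, not the exact feature set validated in the study.

```python
import math

def extract_features(window):
    """Compute simple statistical features from one window of tri-axial
    accelerometer samples, given as a list of (x, y, z) tuples.
    Returns [mean_x, std_x, mean_y, std_y, mean_z, std_z, mean_magnitude].
    Illustrative only; the paper's optimized feature set is richer."""
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in window]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        feats.extend([mean, math.sqrt(var)])
    # Signal magnitude area-style feature: mean Euclidean norm per sample
    mag = sum(math.sqrt(x * x + y * y + z * z) for x, y, z in window)
    feats.append(mag / len(window))
    return feats

# Example: a stationary phone lying flat reports roughly (0, 0, 1 g)
features = extract_features([(0.0, 0.0, 1.0)] * 4)
```

Feature vectors of this form would then be fed to a standard classifier (e.g. a decision tree or nearest-neighbour scheme) trained per activity class.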
Depression is one of the most prevalent mental disorders, burdening many people worldwide. A system with the potential of serving as a decision support system is proposed, based on novel features extracted from facial expression geometry and speech, interpreting non-verbal manifestations of depression. The proposed system was tested in both gender-independent and gender-based modes, and with different fusion methods. The algorithms were evaluated over several combinations of parameters and classification schemes on the datasets provided by the Audio/Visual Emotion Challenges of 2013 and 2014. The proposed framework achieved a precision of 94.8% in detecting persons with high scores on a self-report scale of depressive symptomatology. Optimal system performance was obtained using a nearest-neighbour classifier on the decision fusion of geometrical features in the gender-independent mode and audio-based features in the gender-based mode; the single visual and audio decisions were combined with the binary OR operation.
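The decision-level fusion described above can be sketched as follows: each modality's classifier emits a binary decision per subject, and the fused decision flags depression if either modality fires. The function name and list-based interface are assumptions for illustration, not the authors' implementation.

```python
def or_fusion(visual_preds, audio_preds):
    """Late (decision-level) fusion of two binary classifiers.
    Each input is a list of per-subject binary decisions (0 or 1);
    the fused decision is 1 if either modality predicts 1 (logical OR)."""
    return [int(v or a) for v, a in zip(visual_preds, audio_preds)]

# Example: four subjects, fused positive whenever either modality is positive
fused = or_fusion([0, 1, 0, 1], [0, 0, 1, 1])
```

An OR fusion favours sensitivity: a subject is flagged when either the facial-geometry or the speech pipeline detects depressive indicators, at the cost of inheriting false positives from both.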