Falls are one of the leading causes of injury among elderly people. Using wearable devices for fall detection is costly and may inconvenience the daily lives of the elderly. In this paper, we present an automated fall detection approach that requires only a low-cost depth camera. Our approach combines two computer vision techniques: shape-based fall characterization and a learning-based classifier that distinguishes falls from other daily actions. Given a fall video clip, we extract curvature scale space (CSS) features of human silhouettes at each frame and represent the action by a bag of CSS words (BoCSS). Then, we utilize the extreme learning machine (ELM) classifier to identify the BoCSS representation of a fall from those of other actions. To eliminate the sensitivity of ELM to its hyperparameters, we present a variable-length particle swarm optimization algorithm to optimize the number of hidden neurons and the corresponding input weights and biases of ELM. Using a low-cost Kinect depth camera, we build an action dataset that consists of six types of actions (falling, bending, sitting, squatting, walking, and lying) from ten subjects. Experiments on this dataset show that our approach achieves up to 91.15% sensitivity, 77.14% specificity, and 86.83% accuracy. On a public dataset, our approach performs comparably to state-of-the-art fall detection methods that require multiple cameras.
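The sketch below illustrates the bag-of-CSS-words (BoCSS) step described above: per-frame CSS feature vectors are quantized against a learned codebook and pooled into one histogram per clip. The CSS extraction itself is replaced by random placeholder descriptors, and the codebook size and the use of k-means are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal BoCSS sketch: quantize per-frame CSS descriptors with a k-means codebook
# and pool them into a normalized bag-of-words histogram for the whole clip.
import numpy as np
from sklearn.cluster import KMeans

def bocss_histogram(frame_css_features, kmeans):
    """Return the L1-normalized BoCSS histogram of one action clip."""
    words = kmeans.predict(frame_css_features)                       # assign each frame to a CSS word
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Build the codebook from CSS features pooled over all training frames (random stand-ins here).
rng = np.random.default_rng(0)
training_frames = rng.normal(size=(500, 32))                          # 500 frames, 32-D descriptors (assumed)
codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(training_frames)

# Represent one clip (e.g., a fall) as a single BoCSS vector to feed the ELM classifier.
clip_features = rng.normal(size=(40, 32))                             # 40 frames in the clip
print(bocss_histogram(clip_features, codebook).shape)                 # -> (64,)
```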
In this paper, we propose a novel three-dimensional combined-feature method for sign language recognition. Based on Kinect depth data and skeleton joint data, we acquire the 3D trajectories of the right hand, right wrist, and right elbow. To construct the feature vector, we combine location features with a spherical-coordinate feature representation. The spherical-coordinate representation effectively depicts the kinematic connectivity among the hand, wrist, and elbow for recognition. Meanwhile, the 3D trajectory data acquired from the Kinect avoids interference from illumination changes and cluttered backgrounds. In experiments on a dataset of 20 gestures from Chinese sign language, the Extreme Learning Machine (ELM) is evaluated and compared with the Support Vector Machine (SVM), verifying its superior recognition performance.
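A minimal sketch of the spherical-coordinate feature construction described above is given below: each hand and wrist position is expressed relative to the elbow joint as (r, theta, phi) and concatenated with the raw 3D locations. The choice of the elbow as the origin and the exact concatenation scheme are assumptions for illustration, not the paper's precise feature definition.

```python
# Sketch: combine raw 3D joint locations with spherical-coordinate offsets
# of the hand and wrist relative to the elbow, per Kinect frame.
import numpy as np

def to_spherical(xyz):
    """Convert an (N, 3) array of Cartesian offsets into (N, 3) spherical coordinates."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.linalg.norm(xyz, axis=1)
    theta = np.arccos(np.divide(z, r, out=np.zeros_like(r), where=r > 0))   # polar angle
    phi = np.arctan2(y, x)                                                  # azimuth
    return np.stack([r, theta, phi], axis=1)

def gesture_feature(hand, wrist, elbow):
    """Concatenate 3D locations with spherical offsets of hand and wrist w.r.t. the elbow."""
    spherical = np.hstack([to_spherical(hand - elbow), to_spherical(wrist - elbow)])
    return np.hstack([hand, wrist, elbow, spherical]).ravel()               # one flat vector per gesture

# Example with a short synthetic trajectory of T = 30 Kinect frames.
T = 30
rng = np.random.default_rng(1)
hand, wrist, elbow = (rng.normal(size=(T, 3)) for _ in range(3))
print(gesture_feature(hand, wrist, elbow).shape)                            # -> (450,)
```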
The Extreme Learning Machine (ELM) for Single-hidden Layer Feedforward Neural Networks (SLFNs) has been attracting attention because of its faster learning speed and better generalization performance compared with traditional gradient-based learning algorithms. However, the generalization performance of the ELM classifier depends critically on the number of hidden neurons and on the random determination of the input weights and hidden biases. In this paper, we propose a Variable-length Particle Swarm Optimization (VPSO) algorithm for ELM that automatically selects the number of hidden neurons, as well as the corresponding input weights and hidden biases, to maximize the ELM classifier's generalization performance. Experimental results verify that the proposed VPSO-ELM scheme significantly improves the testing accuracy on classification problems.
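For reference, the sketch below shows a basic ELM classifier: random input weights and hidden biases, a sigmoid hidden layer, and output weights solved by a Moore-Penrose pseudoinverse. In the VPSO-ELM scheme described above, the number of hidden neurons and the random quantities would instead be selected by the particle swarm; that search loop is omitted here, and the function and variable names are illustrative assumptions.

```python
# Basic ELM sketch: random hidden layer, least-squares output weights.
import numpy as np

def elm_train(X, y_onehot, n_hidden, rng):
    """Train an ELM for an SLFN: random input weights/biases, pseudoinverse output weights."""
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))     # random input weights
    b = rng.uniform(-1, 1, size=n_hidden)                   # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                  # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y_onehot                     # output weights via Moore-Penrose inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Tiny synthetic two-class example.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                                             # one-hot targets
W, b, beta = elm_train(X, Y, n_hidden=50, rng=rng)
print((elm_predict(X, W, b, beta) == y).mean())              # training accuracy
```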