Human falls occur rarely; however, detecting them is very important from a health and safety perspective. Because falls are rare, it is difficult to employ supervised classification techniques to detect them, and in such highly skewed settings it is also difficult to extract domain-specific features that identify falls. In this paper, we present a novel framework, DeepFall, which formulates fall detection as an anomaly detection problem. The DeepFall framework presents the novel use of deep spatio-temporal convolutional autoencoders to learn spatial and temporal features from normal activities captured by non-invasive sensing modalities. We also present a new anomaly scoring method that combines the reconstruction scores of a frame across video sequences to detect unseen falls. We tested the DeepFall framework on three publicly available datasets collected through non-invasive sensing modalities (a thermal camera and depth cameras) and show superior results, in comparison to traditional autoencoder and convolutional autoencoder methods, in identifying unseen falls.
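The anomaly scoring idea described above (combining the reconstruction errors a frame receives from every overlapping video sequence it appears in) can be sketched as follows. This is an illustrative assumption about the windowing layout, not the papers' exact formulation: windows slide by one frame, and the per-frame errors are reduced with a configurable function such as the mean or max.

```python
import numpy as np

def anomaly_scores(errors, num_frames, window_len, reduce=np.mean):
    """Combine per-window reconstruction errors into one score per frame.

    errors: array of shape (num_windows, window_len), where errors[w, j]
        is the reconstruction error of the j-th frame inside window w.
        Window w is assumed to cover frames w .. w + window_len - 1
        (stride of one frame), so each frame is reconstructed by up to
        window_len different windows.
    reduce: how to fuse a frame's errors across windows (mean/max are
        illustrative choices).
    """
    per_frame = [[] for _ in range(num_frames)]
    for w in range(errors.shape[0]):
        for j in range(window_len):
            per_frame[w + j].append(errors[w, j])
    return np.array([reduce(e) for e in per_frame])

# Example: 4 frames, windows of length 2 -> 3 windows.
errs = np.array([[1.0, 2.0],
                 [3.0, 4.0],
                 [5.0, 6.0]])
scores = anomaly_scores(errs, num_frames=4, window_len=2)
```

A frame whose score stays high across all windows containing it is flagged as anomalous (an unseen fall), since the autoencoder, trained only on normal activities, fails to reconstruct it well.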
Human falls occur very rarely, which makes it difficult to employ supervised classification techniques to detect them. Moreover, the sensing modality used must preserve the identity of those being monitored. In this paper, we investigate the use of a thermal camera for fall detection, since it effectively masks the identity of those being monitored. We formulate fall detection as an anomaly detection problem and use autoencoders to identify falls. We also present a new anomaly scoring method that combines the reconstruction scores of a frame across different video sequences. Our experiments suggest that convolutional LSTM autoencoders perform better than convolutional and deep autoencoders in detecting unseen falls.
People living with dementia (PLwD) often exhibit behavioral and psychological symptoms, such as episodes of agitation and aggression. Agitated behavior in PLwD causes distress and increases the risk of injury to both patients and caregivers. In this paper, we present the use of a multi-modal wearable device that captures motion and physiological indicators to detect agitation in PLwD. We identify the features extracted from the sensor signals that are most relevant for agitation detection. We hypothesize that combining multi-modal sensor data is more effective at identifying agitation in PLwD than using a single sensor. The results of this unique pilot study are based on data from 17 participants, collected over 600 days from PLwD admitted to a Specialized Dementia Unit. Our findings show the importance of using multi-modal sensor data and highlight the most significant features for agitation detection.