We present software (ETHOWATCHER®) developed to support ethography, object tracking and extraction of kinematic variables from digital video files of laboratory animals. The tracking module allows controlled segmentation of the target from the background, extracting image attributes used to calculate the distance traveled, orientation, length and area of the experimental animal, as well as a graph of its path. The ethography module allows recording of catalog-based behaviors from the environment or from video files, either continuously or frame by frame. The output reports the duration, frequency and latency of each behavior and the sequence of events in a user-defined, time-segmented format. Validation tests were conducted on kinematic measurements and on the detection of known behavioral effects of drugs. The software is freely available at www.ethowatcher.ufsc.br.
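The abstract does not specify ETHOWATCHER's segmentation algorithm, so the following Python sketch only illustrates the general idea it describes: thresholded background subtraction with a user-controlled threshold, followed by centroid tracking to accumulate the distance traveled. The function name `track_animal` and the static-background assumption are illustrative, not taken from the paper.

```python
# Illustrative sketch only: segments the animal from a static background image
# and accumulates the centroid path, distance traveled and blob area.
# Orientation and length could similarly be derived, e.g. from cv2.fitEllipse.
import cv2
import numpy as np

def track_animal(video_path, background, threshold=30):
    """Track one animal against a known background; returns path, distance, areas."""
    cap = cv2.VideoCapture(video_path)
    path, areas, distance = [], [], 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # controlled segmentation: pixels differing from background beyond threshold
        mask = (cv2.absdiff(gray, background) > threshold).astype(np.uint8)
        moments = cv2.moments(mask, binaryImage=True)
        if moments["m00"] == 0:
            continue  # animal not detected in this frame
        cx = moments["m10"] / moments["m00"]
        cy = moments["m01"] / moments["m00"]
        if path:
            distance += np.hypot(cx - path[-1][0], cy - path[-1][1])
        path.append((cx, cy))
        areas.append(moments["m00"])  # segmented area in pixels
    cap.release()
    return path, distance, areas
```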
Over the last few years, the use of new technologies to support elderly people, and dementia patients in particular, has received increasing interest. We investigated the use of a video monitoring system with automatic event recognition for the assessment of instrumental activities of daily living (IADL) in dementia patients. Participants (19 healthy control subjects (HC) and 19 mild cognitive impairment (MCI) patients) had to carry out a standardized scenario consisting of several IADLs, such as making a phone call, while they were recorded by 2D video cameras. After the recording session, the data were processed by a video signal analysis platform to extract kinematic parameters and detect the activities undertaken by each participant. We compared our automated predictions of activity quality and cognitive health with direct-observation annotations and neuropsychological assessment scores. Overall, activities were correctly detected automatically with a sensitivity of 85.31% and a precision of 75.90%. Activity frequency differed significantly between MCI and HC participants (p < 0.05). In all activities, differences in execution time could be identified in both the manually and the automatically extracted data. We obtained statistically significant correlations between both manually and automatically extracted parameters and neuropsychological test scores (p < 0.05). However, no significant differences were found between the groups according to the IADL scale. The results suggest that it is possible to assess IADL functioning with the help of an automatic video monitoring system, and that significant group differences can be obtained from the extracted data.
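The sensitivity and precision figures above follow the standard definitions from detection counts; the short sketch below restates those definitions, with purely hypothetical counts for illustration.

```python
# Standard detection metrics: sensitivity = TP / (TP + FN),
# precision = TP / (TP + FP). The counts below are hypothetical.
def detection_metrics(true_positives, false_negatives, false_positives):
    sensitivity = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return sensitivity, precision

sens, prec = detection_metrics(true_positives=85, false_negatives=15, false_positives=27)
print(f"sensitivity={sens:.2%}, precision={prec:.2%}")
```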
Behavior studies on the neurobiological effects of environmental, pharmacological and physiological manipulations in laboratory animals try to correlate these procedures with specific changes in animal behavior. Parameters such as duration, latency and frequency are assessed from visually recorded sequences of behaviors to distinguish changes due to the manipulation. Since the behavioral recording procedure is intrinsically interpretative, high variability in experimental results is expected and usual, owing to observer-related influences such as experience, knowledge, stress, fatigue and personal biases. Here, we present a computer program that supports the assessment of inter- and intra-observer concordance using statistical indices (e.g., the Kappa and Kendall coefficients and the concordance index). The software was tested in a case study with 4 different observers, naïve to behavioral recording procedures. In the paired analysis, the highest agreement achieved was 0.76 for the concordance index and 0.47 for the Kappa coefficient (where 0 is no agreement and 1 is total agreement). Observers showed poor concordance indices (lower than 0.7), underscoring concerns about the stability of observers' recordings and about the precise morphological definition of the recorded behaviors. Because these indices relate to different aspects of behavioral recording, they can also be used to train observers and to refine behavioral catalog definitions.
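For readers unfamiliar with the indices named above, here is a minimal sketch of how a concordance index and Cohen's Kappa can be computed for two observers who coded the same time bins. The behavior labels and the assumption of aligned bins are illustrative, not taken from the paper.

```python
# Inter-observer agreement on a sequence of behavior codes, assuming both
# observers scored the same, aligned time bins.
import numpy as np

def concordance_index(obs_a, obs_b):
    """Raw agreement: fraction of time bins on which the two observers agree."""
    obs_a, obs_b = np.asarray(obs_a), np.asarray(obs_b)
    return np.mean(obs_a == obs_b)

def cohens_kappa(obs_a, obs_b):
    """Agreement corrected for chance: kappa = (p_o - p_e) / (1 - p_e)."""
    obs_a, obs_b = np.asarray(obs_a), np.asarray(obs_b)
    p_o = np.mean(obs_a == obs_b)  # observed agreement
    labels = np.union1d(obs_a, obs_b)
    # expected agreement if both observers labeled independently at their own rates
    p_e = sum(np.mean(obs_a == c) * np.mean(obs_b == c) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Two observers coding the same video into behavior categories per time bin:
a = ["groom", "rear", "rear", "walk", "groom", "walk"]
b = ["groom", "walk", "rear", "walk", "rear", "walk"]
print(concordance_index(a, b), cohens_kappa(a, b))
```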
Many supervised approaches report state-of-the-art results for recognizing short-term actions in manually clipped videos by exploiting fine body-motion information. The main downside of these approaches is that they are not applicable in real-world settings; the challenge is different for unstructured scenes and long-term videos. Unsupervised approaches have been used to model long-term activities, but their main pitfall is a limited ability to handle subtle differences between similar activities, since they mostly rely on global motion information. In this paper, we present a hybrid approach for long-term human activity recognition that recognizes activities more precisely than unsupervised approaches. It enables the processing of long-term videos by automatically clipping them and performing online recognition. The performance of our approach has been tested on two Activities of Daily Living (ADL) datasets, and the experimental results are promising compared to existing approaches.
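The abstract does not detail the hybrid method, so the sketch below illustrates only the clipping-plus-online-recognition idea it mentions: a long video stream is cut into overlapping fixed-length windows, and each window is classified as soon as it completes. `classify_clip`, the window length and the stride are hypothetical stand-ins for whatever short-term recognizer and parameters the authors actually use.

```python
# Sketch of online recognition over a long, unclipped video: buffer incoming
# frames and classify each fixed-length window as it fills, stepping by `stride`.
from collections import deque

def online_recognition(frame_stream, classify_clip, window=64, stride=32):
    """Yield (start_frame_index, label) for each completed sliding window."""
    buffer = deque(maxlen=window)
    for i, frame in enumerate(frame_stream):
        buffer.append(frame)
        start = i + 1 - window
        # classify once the buffer is full and we are on a stride boundary
        if start >= 0 and start % stride == 0:
            yield start, classify_clip(list(buffer))
```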