A hallmark behavioral feature of fragile X syndrome (FXS) is marked impairment in social gaze during interactions with others. However, previous eye tracking studies of this phenomenon have presented static photographs or videos of social interactions rather than a real-life social partner. To improve upon these studies, we used a customized eye tracking configuration to quantify the social gaze of 51 individuals with FXS and 19 controls, aged 14–28 years, while they engaged in a naturalistic face-to-face interaction with a female experimenter. Importantly, the control group was matched to the FXS group on age, developmental functioning, and degree of autistic symptomatology. Participants with FXS spent significantly less time looking at the face and showed shorter episodes of social gaze, separated by longer intervals, than controls. Regression analyses indicated that communication ability predicted higher levels of social gaze in individuals with FXS but not in controls; conversely, degree of autistic symptoms predicted lower levels of social gaze in controls but not in individuals with FXS. Taken together, these data indicate that naturalistic social gaze in FXS can be measured objectively with existing eye tracking technology during face-to-face interactions. Because the observed impairments in social gaze were specific to FXS, this paradigm could serve as an objective and ecologically valid outcome measure in ongoing Phase II/Phase III clinical trials of FXS-specific interventions.
This paper proposes a framework that discovers activities in an unsupervised manner and adds semantics to them with minimal supervision. The framework takes basic trajectory information as input and proceeds up to video interpretation. It narrows the gap between low-level information and semantic interpretation by building an intermediate layer composed of Primitive Events. The proposed representation for Primitive Events aims to capture small, meaningful motions over the scene, with the advantage of being learned in an unsupervised manner. We propose using these Primitive Events as the main descriptors for activity discovery, which is performed using only real tracking data. Semantics are then added to the discovered activities, and activities (e.g., "Cooking", "Eating") can be recognized automatically in new datasets. Finally, we validate the descriptors by discovering and recognizing activities in a home care application dataset.
This work proposes a complete framework for human activity discovery, modeling, and recognition from video. The framework takes trajectory information as input and proceeds up to video interpretation. It narrows the gap between low-level vision information and semantic interpretation by building an intermediate layer composed of Primitive Events. The proposed representation for Primitive Events aims to capture meaningful motions (actions) over the scene, with the advantage of being learned in an unsupervised manner. We propose using Primitive Events as descriptors to discover, model, and recognize activities automatically. Activity discovery is performed using only real tracking data. Semantics are added to the discovered activities (e.g., "Preparing Meal", "Eating"), and recognition is performed on new datasets.
In this study, the authors propose a complete framework based on a hierarchical activity model to understand and recognise activities of daily living in unstructured scenes. At each instant of a long video, the framework extracts a set of space-time trajectory features describing the global position of an observed person and the motion of his/her body parts. This motion information is gathered into a new feature that the authors call perceptual feature chunks (PFCs). The set of PFCs is used to learn, in an unsupervised way, the particular regions of the scene (its topology) where the important activities occur. Using the topology and the PFCs, the video is decomposed into a set of small, semantically meaningful events ('primitive events'). Sequences of primitive events and topologies are then used to construct hierarchical models of activities. The approach has been tested in a medical application monitoring patients with Alzheimer's disease and dementia. The authors compared their approach with their previous work and with a rule-based approach; experimental results show that the framework outperforms these existing methods and has the potential to serve as a monitoring tool in medical applications.
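The core idea shared by these abstracts (unsupervised scene topology learned from trajectories, then trajectories re-expressed as sequences of primitive events) can be illustrated with a minimal sketch. This is not the authors' code: the function names, the use of plain k-means over 2-D positions, and the "one event per region change" rule are simplifying assumptions for illustration only.

```python
# Illustrative sketch, NOT the published implementation: learn scene
# regions ("topology") by k-means clustering of trajectory points, then
# emit one "primitive event" per region transition along a trajectory.
import random
from math import dist


def learn_topology(points, k=3, iters=20, seed=0):
    """Unsupervised k-means over (x, y) trajectory points -> region centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        # Recompute each center as the mean of its group (keep old if empty).
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers


def primitive_events(trajectory, centers):
    """Label each point with its nearest region; emit one event per change."""
    labels = [min(range(len(centers)), key=lambda i: dist(p, centers[i]))
              for p in trajectory]
    events = [labels[0]]
    for lab in labels[1:]:
        if lab != events[-1]:
            events.append(lab)
    return events
```

In this toy form, a trajectory crossing three learned regions yields the event sequence of region indices it visits; a real system would attach semantic labels (e.g., "at stove" → "Preparing Meal") to those regions and feed the event sequences into hierarchical activity models.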