In this work we employ multitask learning to exploit the structure present in related supervised tasks when training complex neural networks. Multitask learning trains a network for several objectives in parallel so that performance on at least one of them improves, by building a shared representation that accommodates more information than it would for a single task. We apply this idea to action recognition in egocentric videos by introducing additional supervised tasks. Alongside the action label, we learn the verbs and nouns of which action labels consist, and we predict coordinates that capture the hand locations and the gaze-based visual saliency for every frame of the input video segment. This forces the network to attend explicitly to cues from the secondary tasks that it might otherwise have missed, leading to improved inference. Our experiments on EPIC-Kitchens and EGTEA Gaze+ show consistent improvements when training with multiple tasks over the single-task baseline. Furthermore, on EGTEA Gaze+ we outperform the state of the art in action recognition by 3.84%. Besides actions, our method produces accurate hand and gaze estimates as side outputs, without requiring any input at test time other than the RGB video clips.
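Below is a minimal sketch of how such a multitask objective could be wired up, assuming a PyTorch backbone that produces a shared clip-level feature; the layer sizes, task weights, and head names are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: one shared feature feeds verb, noun, hand-coordinate, and
# gaze heads; the combined loss trains the shared backbone on all tasks.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, feat_dim, n_verbs, n_nouns, n_frames):
        super().__init__()
        self.verb = nn.Linear(feat_dim, n_verbs)        # verb classification
        self.noun = nn.Linear(feat_dim, n_nouns)        # noun classification
        self.hands = nn.Linear(feat_dim, n_frames * 4)  # (x, y) per hand per frame
        self.gaze = nn.Linear(feat_dim, n_frames * 2)   # gaze point per frame

    def forward(self, shared_feat):
        return (self.verb(shared_feat), self.noun(shared_feat),
                self.hands(shared_feat), self.gaze(shared_feat))

def multitask_loss(outputs, targets, w=(1.0, 1.0, 0.5, 0.5)):
    # Task weights w are an assumption; the paper may weight tasks differently.
    verb_logits, noun_logits, hand_coords, gaze_coords = outputs
    verb_t, noun_t, hand_t, gaze_t = targets
    ce, mse = nn.functional.cross_entropy, nn.functional.mse_loss
    return (w[0] * ce(verb_logits, verb_t) + w[1] * ce(noun_logits, noun_t)
            + w[2] * mse(hand_coords, hand_t) + w[3] * mse(gaze_coords, gaze_t))
```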
Egocentric vision is an emerging field of computer vision characterized by the acquisition of images and video from a first-person perspective. In this paper we address egocentric human action recognition by explicitly using the presence and position of detected regions of interest in the scene, without any further use of visual features. Recognizing that human hands are essential to the execution of actions, we first focus on their movements as the principal cues that define actions. We employ object detection and region tracking to locate the hands and capture their movements, and prior knowledge about egocentric views helps distinguish the left hand from the right. For detection and tracking, we contribute a pipeline that operates successfully on unseen egocentric videos, finding the camera wearer's hands and associating them through time. We also emphasize the value of scene information for action recognition: the presence of objects matters for the execution of actions and, more generally, for the description of a scene. To capture this information, we apply object detection for the specific classes relevant to the actions we want to recognize. Our experiments target videos of kitchen activities from the EPIC-Kitchens dataset, and we model action recognition as a sequence-learning problem over the detected spatial positions in the frames. Our results show that explicit hand and object detections, with no other visual information, can be relied upon to classify hand-related human actions. Comparisons against methods that depend fully on visual features indicate that, for actions in which hand motions are conceptually important, a region-of-interest-based description of a video carries equally expressive information and achieves comparable classification performance.
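As a hedged illustration of this sequence-learning formulation, the sketch below feeds per-frame detection coordinates (hand and object boxes flattened into a vector) to a recurrent classifier; the choice of an LSTM and all dimensions are assumptions for illustration, not the paper's exact pipeline.

```python
# Hedged sketch: classify an action from a sequence of detected box coordinates.
import torch
import torch.nn as nn

class DetectionSequenceClassifier(nn.Module):
    def __init__(self, per_frame_dim, hidden_dim, n_actions):
        super().__init__()
        # per_frame_dim: e.g. left-hand, right-hand, and object box coordinates
        # concatenated for one frame (an assumed encoding).
        self.rnn = nn.LSTM(per_frame_dim, hidden_dim, batch_first=True)
        self.cls = nn.Linear(hidden_dim, n_actions)

    def forward(self, det_seq):            # det_seq: (batch, frames, per_frame_dim)
        _, (h_n, _) = self.rnn(det_seq)
        return self.cls(h_n[-1])           # logits over action classes

# Example: two hand boxes (8 values) per frame, 100-frame clips, 20 actions.
model = DetectionSequenceClassifier(per_frame_dim=8, hidden_dim=128, n_actions=20)
logits = model(torch.randn(4, 100, 8))
```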
Inconsistent findings between laboratories hamper scientific progress and are of increasing public concern. Differences in laboratory environment are a known factor contributing to poor reproducibility of findings between research sites, and well-controlled multisite efforts are an important next step toward identifying the factors needed to reduce variation in study outcomes between laboratories. Through harmonization of apparatus, test protocol, and aligned and non-aligned environmental variables, the present study shows that behavioral pharmacological responses in Shank2 knockout (KO) rats, a model of synaptic dysfunction relevant to autism spectrum disorders, were highly replicable across three research centers. All three sites reliably observed a hyperactive and repetitive behavioral phenotype in KO rats compared with their wild-type littermates, as well as a dose-dependent attenuation of the phenotype following acute injections of a selective mGluR1 antagonist. These results show that reproducibility in preclinical studies can be achieved and emphasize the need for high-quality, rigorous methodologies in scientific research. Given the observed external validity, the present study also suggests mGluR1 as a potential target for the treatment of autism spectrum disorders.
Extracting information about emotion from heart rate in real life is complicated by the concurrent effect of physical activity on heart rate through metabolic demand. “Non-metabolic heart rate,” the portion of heart rate caused by factors other than physical activity, may be a more sensitive and more universally applicable correlate of emotion than heart rate itself. The aim of the present article is to examine the evidence that non-metabolic heart rate, as it has been determined so far, indeed reflects emotion. We focus on methods based on accelerometry, since accelerometers are readily available in devices suitable for daily-life use. The evidence that non-metabolic heart rate as determined by existing methods reflects emotion is limited, so we explore alternative routes. We conclude that, for real-life applications, the most promising approach is to estimate the type and intensity of activities from accelerometry (and other information) and, in turn, use these estimates to determine the non-metabolic heart rate for emotion.
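One simple way to operationalize non-metabolic heart rate, sketched below under strong simplifying assumptions, is to regress heart rate on an accelerometer-derived activity measure and treat the residual as the non-metabolic component; the linear model and the per-epoch activity counts are illustrative choices, not a method endorsed by the article.

```python
# Hedged sketch: residual heart rate after removing an activity-predicted component.
import numpy as np

def non_metabolic_hr(heart_rate, activity_counts):
    """heart_rate, activity_counts: 1-D arrays aligned per epoch (e.g., 1 min)."""
    # Fit HR ~ a * activity + b as a crude metabolic component (assumed linear).
    a, b = np.polyfit(activity_counts, heart_rate, deg=1)
    predicted_metabolic_hr = a * activity_counts + b
    # Residual heart rate, attributed to non-metabolic factors such as emotion.
    return heart_rate - predicted_metabolic_hr
```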