Recent advances in noninvasive measurement techniques have shown that it is possible to decode visual information from measurable brain activity. However, these studies typically focused on the mapping between neural activity and visual information, such as an image or video stimulus, at the individual level. Here, we investigated common decoding models across individuals that classify behavioral tasks from brain signals. We proposed a cross-subject decoding approach using deep transfer learning (DTL) to decipher behavioral tasks from functional magnetic resonance imaging (fMRI) recordings acquired while subjects performed different tasks. We connected parts of state-of-the-art networks pre-trained on the ImageNet dataset to our adaptation layers to classify the behavioral tasks from fMRI data. Our experiments on the Human Connectome Project (HCP) dataset showed that the proposed method achieved higher cross-subject decoding accuracy than previous studies. We also conducted an experiment on five subsets of the HCP data, which further demonstrated that our DTL approach is more effective on small datasets than traditional methods.
INDEX TERMS Neural decoding, functional magnetic resonance imaging, cross-subject, deep transfer learning.
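A minimal sketch of the deep-transfer-learning setup described above: a backbone pre-trained on ImageNet is frozen and connected to trainable adaptation layers that classify task labels from fMRI data rendered as images. The choice of ResNet-50, the adaptation-layer sizes, and the seven-class task setup are illustrative assumptions, not details taken from the abstract.

```python
# Hedged sketch: DTL decoder reusing an ImageNet-pretrained backbone.
# Requires torchvision >= 0.13 for the weights enum.
import torch
import torch.nn as nn
from torchvision import models

class DTLDecoder(nn.Module):
    def __init__(self, num_tasks: int = 7):  # number of behavioral tasks (assumption)
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep the convolutional feature extractor, drop the original FC head.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.features.parameters():
            p.requires_grad = False  # freeze pre-trained weights
        # Adaptation layers trained on the fMRI data.
        self.adapter = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2048, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_tasks),
        )

    def forward(self, x):
        # x: fMRI activation maps rendered as 3-channel images, shape (B, 3, 224, 224)
        return self.adapter(self.features(x))

model = DTLDecoder(num_tasks=7)
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 7])
```

Freezing the backbone and training only the adaptation layers is one common transfer-learning recipe and is what makes the approach attractive for small datasets; fine-tuning some backbone layers is an equally plausible variant.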
In this paper, we propose a new method for detecting abnormal human behavior based on skeleton features using self-attention augmented graph convolution. Skeleton data have been shown to be robust to complex backgrounds, illumination changes, and dynamic camera scenes, and are naturally structured as a graph in non-Euclidean space. In particular, spatial temporal graph convolutional networks (ST-GCN) can effectively learn the spatio-temporal relationships in such non-Euclidean data. However, ST-GCN only operates on local neighborhood nodes and thereby lacks global information. We propose a novel spatial temporal self-attention augmented graph convolutional network (SAA-Graph) that combines an improved spatial graph convolution operator with a modified transformer self-attention operator to capture both the local and global information of the joints. The spatial self-attention augmented module is used to model the intra-frame relationships between human body parts. To the best of our knowledge, we are the first to use self-attention to enhance spatial temporal graph convolution for video anomaly detection. To validate the proposed model, we performed extensive experiments on two large-scale public benchmark datasets (ShanghaiTech Campus and CUHK Avenue), which show state-of-the-art performance for the proposed approach compared to existing skeleton-based and graph convolution methods.
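A hedged sketch of the core idea: a spatial unit that sums a graph convolution (local neighborhood aggregation through the skeleton adjacency matrix) with self-attention over the joints of a frame (global intra-frame relationships). The layer sizes, single-head attention, residual sum, and placeholder adjacency are illustrative assumptions rather than the authors' exact SAA-Graph operators.

```python
# Hedged sketch: spatial self-attention augmented graph convolution unit.
import torch
import torch.nn as nn

class SpatialSAAUnit(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, A: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)                 # (V, V) normalized skeleton adjacency
        self.gcn = nn.Linear(in_ch, out_ch)          # graph conv: aggregate neighbors, then project
        self.attn = nn.MultiheadAttention(out_ch, num_heads=1, batch_first=True)
        self.proj = nn.Linear(in_ch, out_ch)         # project inputs to the attention dimension
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (N*T, V, C) -- the V joints of one frame per row of the batch
        local = self.gcn(torch.einsum("uv,nvc->nuc", self.A, x))  # local neighborhood information
        q = self.proj(x)
        global_, _ = self.attn(q, q, q)                           # global joint-to-joint information
        return self.relu(local + global_)                         # augment the GCN branch with attention

V = 18                                   # e.g. an OpenPose-style joint count (assumption)
A = torch.eye(V)                         # placeholder adjacency; use the real skeleton graph in practice
unit = SpatialSAAUnit(in_ch=2, out_ch=64, A=A)
out = unit(torch.randn(8, V, 2))         # 8 frames, V joints, (x, y) coordinates
print(out.shape)                         # torch.Size([8, 18, 64])
```

In a full model, units like this would be interleaved with temporal convolutions across frames, following the usual ST-GCN layout; the attention branch is what supplies the global information that plain graph convolution lacks.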