Multimodal human behaviour recognition is a research hotspot in computer vision. To make full use of both skeleton and depth data, this paper constructs a new multimodal recognition network that incorporates the self-attention mechanism. The system comprises a transformer-based skeleton self-attention subnetwork and a CNN-based depth self-attention subnetwork. For the skeleton self-attention subnetwork, this paper proposes a motion synergy space feature that integrates the information of each joint according to the entirety and synergy of human motion, and puts forward a quantitative measure of each joint's contribution to the motion. The outputs of the skeleton self-attention subnetwork and the depth self-attention subnetwork are fused, and the scheme is evaluated on the NTU RGB+D and UTD-MHAD datasets. The method achieves a 90% recognition rate on the UTD-MHAD dataset; on the NTU RGB+D dataset it reaches 90.5% under the cross-subject (CS) protocol and 94.7% under the cross-view (CV) protocol. Experimental results show that the proposed network structure achieves a high recognition rate and outperforms most current methods.
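To make the two-stream design concrete, the sketch below illustrates one plausible arrangement of the components named in the abstract: a transformer-based self-attention stream over skeleton sequences, a CNN-based stream with a simple attention gate over depth images, and late fusion of their class scores. This is not the authors' released code; all layer sizes, input shapes, the attention variant in the depth stream, and the fusion weight are assumptions made for illustration only.

```python
# Minimal illustrative sketch of a two-stream skeleton + depth network with
# self-attention and late fusion. Shapes and hyperparameters are assumed.
import torch
import torch.nn as nn


class SkeletonSelfAttentionStream(nn.Module):
    """Transformer encoder over per-frame skeleton feature vectors."""

    def __init__(self, joint_dim=25 * 3, d_model=128, num_classes=60):
        super().__init__()
        self.embed = nn.Linear(joint_dim, d_model)  # project joints -> token
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                    # x: (batch, frames, joint_dim)
        tokens = self.embed(x)               # (batch, frames, d_model)
        feats = self.encoder(tokens)         # self-attention across frames
        return self.head(feats.mean(dim=1))  # temporal average -> class scores


class DepthSelfAttentionStream(nn.Module):
    """Small CNN over a depth image with a simple channel-attention gate."""

    def __init__(self, num_classes=60):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.attn = nn.Sequential(nn.Linear(64, 64), nn.Sigmoid())  # channel gate
        self.head = nn.Linear(64, num_classes)

    def forward(self, d):                    # d: (batch, 1, H, W)
        f = self.features(d).flatten(1)      # (batch, 64)
        return self.head(f * self.attn(f))


class TwoStreamFusion(nn.Module):
    """Late fusion of skeleton-stream and depth-stream class scores."""

    def __init__(self, num_classes=60, alpha=0.5):
        super().__init__()
        self.skeleton = SkeletonSelfAttentionStream(num_classes=num_classes)
        self.depth = DepthSelfAttentionStream(num_classes=num_classes)
        self.alpha = alpha                   # assumed fusion weight

    def forward(self, skel, depth):
        return self.alpha * self.skeleton(skel) + (1 - self.alpha) * self.depth(depth)


if __name__ == "__main__":
    model = TwoStreamFusion()
    skel = torch.randn(2, 32, 25 * 3)        # 2 clips, 32 frames, 25 joints x (x, y, z)
    depth = torch.randn(2, 1, 112, 112)      # matching depth images
    print(model(skel, depth).shape)          # torch.Size([2, 60])
```

A weighted sum of per-stream class scores is only one of several common fusion choices; feature-level concatenation before a shared classifier would be an equally plausible reading of the abstract.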