Purpose
Human behavior recognition poses a pivotal challenge in intelligent computing and cybernetics, with significant impact on engineering and management systems. With the rapid advancement of autonomous systems and intelligent manufacturing, the demand for precise and efficient human behavior recognition technologies is growing. However, traditional methods often suffer from insufficient accuracy and limited generalization when dealing with complex and diverse human actions. This study therefore aims to enhance the precision of human behavior recognition by proposing an innovative framework, dynamic graph convolutional networks with multi-scale position attention (DGCN-MPA).

Design/methodology/approach
The primary applications are in autonomous systems and intelligent manufacturing. The main objective of this study is to develop an efficient human behavior recognition framework that leverages advanced techniques to improve the prediction and interpretation of human actions. The framework addresses the shortcomings of existing methods in handling the complexity and variability of human actions, providing more reliable and precise solutions for practical applications. The proposed DGCN-MPA framework integrates the strengths of convolutional neural networks and graph-based models. It incorporates the wavelet packet transform to extract time-frequency characteristics and an MPA module to enhance the representation of skeletal node positions. The core innovation lies in the fusion of dynamic graph convolution with hierarchical attention mechanisms, which selectively attend to relevant features and spatial relationships and adjust their importance across scales to address the variability of human actions.

Findings
To validate the effectiveness of the DGCN-MPA framework, rigorous evaluations were conducted on the benchmark datasets NTU-RGB+D and Kinetics-Skeleton. The framework achieves an F1 score of 62.18% and an accuracy of 75.93% on NTU-RGB+D, and an F1 score of 69.34% and an accuracy of 76.86% on Kinetics-Skeleton, outperforming existing models. These findings underscore the framework's capability to capture complex behavior patterns with high precision.

Originality/value
By combining dynamic graph convolution with multi-scale position attention mechanisms, this study represents a significant advance in human behavior recognition technology. The innovative design and superior performance of the DGCN-MPA framework support its potential for real-world applications, particularly the integration of behavior recognition into engineering and autonomous systems. In the future, this framework could further propel the development of intelligent computing, cybernetics and related fields.
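
To make the two architectural ideas named in the abstract concrete, the following is a minimal, illustrative sketch of a dynamic graph convolution over skeleton joints combined with a multi-scale position attention block. It is not the authors' implementation: all module names, tensor shapes, scales and hyperparameters are assumptions chosen only to show the general pattern.

```python
# Illustrative sketch only: a dynamic graph convolution plus a multi-scale
# position attention (MPA) block over skeleton data shaped (N, C, T, V) =
# (batch, channels, frames, joints). Shapes and scales are assumed, not taken
# from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScalePositionAttention(nn.Module):
    """Weights joint features at several temporal scales and fuses the results."""

    def __init__(self, channels, scales=(1, 3, 5)):
        super().__init__()
        # One attention branch per temporal scale (kernel height = scale).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=(s, 1), padding=(s // 2, 0))
            for s in scales
        )
        self.fuse = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, x):  # x: (N, C, T, V)
        att = torch.cat([torch.sigmoid(b(x)) * x for b in self.branches], dim=1)
        return self.fuse(att)


class DynamicGraphConv(nn.Module):
    """Graph convolution whose adjacency is a fixed skeleton prior plus a learned offset."""

    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)                        # fixed graph (V, V)
        self.dynamic_A = nn.Parameter(torch.zeros_like(adjacency))  # learned residual graph
        self.theta = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):  # x: (N, C, T, V)
        A = self.A + self.dynamic_A                   # dynamic adjacency
        x = torch.einsum("nctv,vw->nctw", x, A)       # aggregate over joints
        return F.relu(self.theta(x))


if __name__ == "__main__":
    V = 25                                            # e.g. NTU-RGB+D joint count
    A = torch.eye(V)                                  # placeholder adjacency
    block = nn.Sequential(DynamicGraphConv(3, 64, A), MultiScalePositionAttention(64))
    out = block(torch.randn(8, 3, 32, V))             # (batch, xyz, frames, joints)
    print(out.shape)                                  # torch.Size([8, 64, 32, 25])
```

In this sketch the "dynamic" part is simply a learned residual added to the fixed skeleton adjacency, and the "multi-scale" part is a set of parallel temporal branches whose gated outputs are fused; the published framework may realize both ideas quite differently.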