With the growth of computing power and storage capacity, the emergence of massive data has made hand-crafted methods for detecting and recognizing human action features unable to meet practical needs because of their poor generalization ability. By detecting and recognizing human action features with deep learning algorithms, a suitable neural network can be constructed to identify targeted human actions in surveillance video and determine whether they constitute a specific behavior. In this paper, a deep learning algorithm is proposed to optimize the detection of human action features, and a multiview reobservation fusion action recognition model based on 3D pose is designed. Several factors affecting the recognition of human action features are analyzed, and a detailed summary is given with respect to the detection environment. Experiments show that adding one or two feature attention enhancement layers to the multiview observation fusion network improves accuracy by 1% to 3%. In this way, the model can integrate action features from multiple observation angles to judge actions and learn to find observation angles suited to action recognition, thereby improving recognition performance.
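The multiview fusion described above can be illustrated with a minimal sketch: per-view features are scored, normalized with a softmax, and combined into one fused feature, so views more useful for recognition receive larger weights. This is a hypothetical simplification, not the paper's architecture; the scoring vector `w_score` stands in for learned attention parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(view_features, w_score):
    # view_features: (n_views, d) action features, one row per observation angle
    # w_score: (d,) scoring vector (a stand-in for learned attention weights)
    scores = view_features @ w_score           # (n_views,) relevance per view
    weights = softmax(scores)                  # attention distribution over views
    return weights @ view_features             # (d,) weighted fusion of all views

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))    # 4 observation angles, 8-dim features each
w = rng.normal(size=8)
fused = attention_fuse(feats, w)
print(fused.shape)                 # a single fused feature vector: (8,)
```

In a trained network the scoring function would be learned end to end, which is what lets the model "find observation angles suitable for action recognition" rather than averaging all views equally.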
With the rapid development of artificial intelligence technology, more and more robots have entered all walks of life, and today's agriculture is moving toward modernization and automation. On the one hand, rising labor costs make it uneconomical to devote large amounts of manual labor to agricultural operations; on the other hand, the population is growing rapidly, and traditional production and picking practices can no longer keep pace with the times. It is therefore necessary to transform traditional agriculture with artificial intelligence technology. The purpose of this paper is to use artificial intelligence techniques to plan, track, and optimize the displacement trajectory of an agricultural picking robot so as to improve its working efficiency. The paper explains and analyzes neural networks, the D-H (Denavit-Hartenberg) modeling method for manipulators, and the forward and inverse kinematics of the manipulator. Based on neural network algorithms, the manipulator is modeled, its forward and inverse kinematics are analyzed in detail, and a digital model of the picking robot is constructed. The angle and motion speed of each robot joint are then analyzed to reduce trajectory errors caused by friction and other factors. Simulation experiments on displacement trajectory tracking control are carried out, linear and arc trajectory motions are analyzed in depth, and the axis error is greatly reduced after 6 iterations. Finally, the displacement trajectory is optimized: the optimized total movement time is shortened by 6.84 seconds, enabling the picking robot both to maintain working efficiency and to accurately follow the planned displacement trajectory.
After repeated experiments with the algorithm model and the picking robot, the actual trajectory of the picking robot coincides completely with the expected trajectory from 0.7 seconds onward, indicating that the neural network plays a very important role in trajectory research for picking robots.
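The D-H modeling and forward kinematics mentioned above can be sketched as follows. This is a generic illustration of the standard Denavit-Hartenberg convention applied to a hypothetical two-link planar arm, not the paper's actual robot model or parameters.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    # Homogeneous transform for one joint under the standard D-H convention:
    # theta = joint angle, d = link offset, a = link length, alpha = link twist
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    # Chain the per-joint transforms to get the end-effector pose
    T = np.eye(4)
    for row in dh_params:
        T = T @ dh_transform(*row)
    return T

# Hypothetical 2-link planar arm: link lengths 0.5 m and 0.3 m,
# first joint at 90 degrees, second joint at 0 degrees
params = [(np.pi / 2, 0.0, 0.5, 0.0),
          (0.0,       0.0, 0.3, 0.0)]
T = forward_kinematics(params)
print(np.round(T[:2, 3], 3))  # end-effector (x, y): [0.0, 0.8]
```

Inverse kinematics would run this mapping in reverse, solving for joint angles that place the end-effector on a desired picking trajectory; the paper's neural network approach learns such mappings rather than solving them in closed form.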