Purpose: The abnormal behaviors of staff at petroleum stations pose significant safety hazards. To address the high parameter counts, long training times and low recognition rates of existing 3D ResNet behavior recognition models, this paper proposes GTB-ResNet, a network designed to detect abnormal behaviors of petroleum station staff.
Design/methodology/approach: First, to mitigate the excessive parameters and computational complexity of 3D ResNet, a lightweight Ghost residual module based on GhostNet is introduced into the feature extraction network: Ghost convolution replaces standard convolution, reducing model parameters while preserving multi-scale feature extraction capability. Second, to sharpen the model's focus on salient features under wide surveillance ranges and small target objects, a triplet attention module is integrated to enable interaction between spatial and channel information. Finally, to address misjudgments of similar actions caused by short time-series features, a bidirectional gated recurrent unit (Bi-GRU) network is added to the feature extraction backbone, ensuring that key long time-series features are extracted and improving feature extraction accuracy.
Findings: The experiments cover four behavior types: illegal phone answering, smoking, falling (abnormal) and touching the face (normal), comprising 892 videos in total. GTB-ResNet achieves a recognition accuracy of 96.7% with 4.46 M parameters and a computational complexity of 3.898 G. This is a 4.4% accuracy improvement over 3D ResNet, with a 90.4% reduction in parameters and a 61.5% reduction in computational complexity.
Originality/value: The network is designed for edge devices at petroleum stations, where 3D ResNet is tailored for real-time action prediction. To address the large parameter count of 3D ResNet and the resulting difficulty of deployment on edge devices, a lightweight residual module based on Ghost convolution is developed. To tackle low detection accuracy in the noisy environment of petroleum stations, a triplet attention mechanism is introduced during feature extraction to enhance focus on salient features. To reduce misjudgments arising from the similarity of actions, a Bi-GRU model is introduced to strengthen the extraction of key long-term features.
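The Ghost residual idea replaces part of each standard convolution with cheap operations: a small standard convolution produces the intrinsic feature maps, and an inexpensive depthwise convolution generates the remaining "ghost" maps. The sketch below illustrates this for a 3D convolution in PyTorch; the layer names, channel ratio and kernel sizes are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a 3D Ghost convolution block (assumed PyTorch rendering,
# not the GTB-ResNet implementation).
import torch
import torch.nn as nn


class GhostConv3d(nn.Module):
    """Half of the output channels come from a standard 3D convolution,
    the rest from a cheap depthwise 3D convolution ("ghost" features)."""

    def __init__(self, in_channels, out_channels, kernel_size=3, ratio=2):
        super().__init__()
        primary_channels = out_channels // ratio          # intrinsic feature maps
        cheap_channels = out_channels - primary_channels  # ghost feature maps

        # Standard convolution producing the intrinsic feature maps
        self.primary = nn.Sequential(
            nn.Conv3d(in_channels, primary_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm3d(primary_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap depthwise convolution producing the ghost feature maps
        self.cheap = nn.Sequential(
            nn.Conv3d(primary_channels, cheap_channels, kernel_size,
                      padding=kernel_size // 2, groups=primary_channels,
                      bias=False),
            nn.BatchNorm3d(cheap_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        return torch.cat([intrinsic, ghost], dim=1)


# Example: a video clip tensor (batch, channels, frames, height, width)
clip = torch.randn(1, 3, 16, 112, 112)
out = GhostConv3d(3, 64)(clip)   # -> torch.Size([1, 64, 16, 112, 112])
```

Because the depthwise branch has far fewer multiply-accumulate operations than a full convolution over the same number of output channels, swapping this block into each residual unit is what drives the reported parameter and FLOP reductions.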
Purpose: In autonomous driving, the inherent sparsity of point clouds often limits object detection performance, and existing multimodal architectures struggle to meet the real-time requirements of 3D object detection. The main purpose of this paper is therefore to significantly enhance detection performance, especially the recognition of small objects, while addressing slow inference speed. This improves the safety of autonomous driving systems and makes autonomous driving feasible on devices with limited computing power.
Design/methodology/approach: BRTPillar first adopts an element-based method to fuse image and point cloud features. Second, a local-global feature interaction method based on an efficient additive attention mechanism is designed to extract multi-scale contextual information. Finally, an enhanced multi-scale feature fusion method is proposed by introducing adaptive spatial and channel interaction attention mechanisms, improving the learning of fine-grained features.
Findings: Extensive experiments were conducted on the KITTI dataset. Compared with the benchmark model, 3D bounding box accuracy for cars, pedestrians and cyclists improved by 3.05%, 9.01% and 22.65%, respectively, and bird's-eye-view accuracy improved by 2.98%, 10.77% and 21.14%, respectively. BRTPillar runs at 40.27 Hz, meeting the real-time detection needs of autonomous driving.
Originality/value: This paper proposes BRTPillar, a boosting multimodal real-time 3D object detection method that achieves accurate localization in many scenarios, especially complex scenes with many small objects, while maintaining real-time inference speed.
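For context, efficient additive attention avoids the quadratic pairwise query-key interaction of standard attention: it assigns one learned score per token, pools those scores into a single global query, and interacts that query element-wise with every key, so the cost stays linear in the number of pillar/BEV tokens. The sketch below is a generic PyTorch rendering of that idea under assumed dimensions and layer names; it is not the BRTPillar implementation.

```python
# Minimal sketch of efficient additive attention for local-global interaction
# (assumed layer names and shapes; illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EfficientAdditiveAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_query = nn.Linear(dim, dim)
        self.to_key = nn.Linear(dim, dim)
        self.w_a = nn.Parameter(torch.randn(dim, 1))  # learnable scoring vector
        self.scale = dim ** -0.5
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                     # x: (batch, tokens, dim)
        q = F.normalize(self.to_query(x), dim=-1)
        k = F.normalize(self.to_key(x), dim=-1)
        # One score per token: linear in sequence length, no N x N matrix
        scores = torch.softmax(q @ self.w_a * self.scale, dim=1)  # (B, N, 1)
        global_query = (scores * q).sum(dim=1, keepdim=True)      # (B, 1, D)
        # Broadcast the global query onto every key (element-wise interaction)
        return self.proj(k * global_query) + q


# Example: 100 pillar/BEV tokens with 64 channels
tokens = torch.randn(2, 100, 64)
out = EfficientAdditiveAttention(64)(tokens)  # -> torch.Size([2, 100, 64])
```

The linear-time interaction is what makes this style of attention attractive for real-time detectors, where a full self-attention map over all pillars would be prohibitive.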