2021
DOI: 10.3390/electronics10131565
Head Detection Based on DR Feature Extraction Network and Mixed Dilated Convolution Module

Abstract: Pedestrian detection in complex scenes suffers from occlusion issues, such as occlusions between pedestrians. It is well known that, compared with the variability of the human body, the shape of the human head and shoulders changes minimally and is highly stable. Therefore, head detection is an important research area in the field of pedestrian detection. The translational invariance of neural networks enables us to design a deep convolutional neural network, which means that, even if the appearance a…

Cited by 9 publications (10 citation statements)
References 33 publications
“…On the SCUT-HEAD PartA dataset, YOLOv3 can achieve an AP of 80.53%. Reference [18] added a mixed dilated convolution (MDC) module to YOLOv3 to improve its detection accuracy, reaching an AP of 90.0%. Reference [19] proposed an end-to-end semi-supervised head detection framework that introduces a weak box generation branch and a weak box refinement branch for coarse localization and precise adjustment of the target box, pioneering a new approach to semi-supervised head detection that achieves an AP of 85.66%.…”
Section: Experimental Analysis of SCUT-HEAD PartA Dataset
confidence: 99%
“…As shown in Table 3, References [18] and [19] conducted different experiments on the SCUT-HEAD PartA and Brainwash datasets. Reference [18] proposed DR-Net, a feature extraction network based on Darknet-53, to increase the information transmission rate between convolution layers and extract more semantic information. Reference [19] proposed an end-to-end semi-supervised head detection framework, which introduced a weak box generation branch and a weak box refinement branch to generate pseudo ground-truth labels for unlabelled images based on labelled images.…”
Section: Experimental Analysis of Brainwash Dataset
confidence: 99%
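The idea of increasing the information transmission rate between convolution layers can be illustrated with a minimal sketch. The actual DR-Net architecture is not detailed in the statement above, so the block below is only a generic residual shortcut (hypothetical weights, convolutions stood in for by a matrix product in plain NumPy), showing how a skip connection lets features flow past a convolutional transform unchanged:

```python
import numpy as np

def conv_block(x, w):
    # stand-in for a convolutional layer: linear map followed by ReLU
    return np.maximum(x @ w, 0.0)

def residual_block(x, w):
    # the shortcut adds the input to the block output, so information
    # (and gradients during training) can bypass the transform entirely
    return conv_block(x, w) + x

x = np.ones((2, 4))
w = np.zeros((4, 4))
out = residual_block(x, w)  # with zero weights, the shortcut passes x through
```

Dense connections (as in DenseNet-style designs) push the same idea further by concatenating, rather than adding, earlier feature maps.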
“…Zhang et al. (2020) added a multi-scale atrous convolution module to enlarge the receptive field of the feature layers and enhance the learning ability of the network. Liu et al. (2021) designed mixed dilated convolutions with different sampling rates, which expanded the receptive field and improved small-object detection performance. Huang et al. (2022b) designed a novel Parallel-insight Convolution layer to extract information from different domains, integrated with a Spatial-Temporal Dual-Attention unit to extract high-quality global spatial–temporal features.…”
Section: Related Work
confidence: 99%
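The receptive-field effect of mixing dilation rates can be checked with a short calculation. This is a sketch of the general principle, not the cited module: for stride-1 convolutions, a kernel of size k with dilation rate d spans d·(k−1)+1 input positions, and stacked layers add their spans to the receptive field:

```python
def effective_kernel(k: int, d: int) -> int:
    # span covered by a size-k kernel with dilation rate d
    return d * (k - 1) + 1

def stacked_receptive_field(layers) -> int:
    # receptive field of stride-1 conv layers given as (kernel, dilation) pairs
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# three 3x3 layers with mixed dilation rates 1, 2, 3 vs. all rate 1
mixed = stacked_receptive_field([(3, 1), (3, 2), (3, 3)])  # 13
plain = stacked_receptive_field([(3, 1), (3, 1), (3, 1)])  # 7
```

Mixing rates (rather than repeating one rate) also avoids the gridding artifact, where a single repeated dilation samples only a sparse lattice of the input.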
“…Zhao Weidong et al. achieved target detection of steel defects using an average background model [3]. Compared with hybrid Gaussian background modeling and the average background model, the ViBe algorithm has good fault tolerance, high computing speed, and high detection accuracy [4]. Therefore, the ViBe algorithm is used as the target detection algorithm in this paper for sports action recognition.…”
Section: Related Work
confidence: 99%