2020
DOI: 10.3390/jimaging6120142

High-Profile VRU Detection on Resource-Constrained Hardware Using YOLOv3/v4 on BDD100K

Abstract: Vulnerable Road User (VRU) detection is a major application of object detection, aimed at helping to reduce accidents in advanced driver-assistance systems and at enabling the development of autonomous vehicles. Owing to the intrinsic complexity of computer vision and to limitations in processing capacity and bandwidth, this task has not yet been completely solved. For these reasons, the well-established YOLOv3 network and the newer YOLOv4 are assessed by training them on a large, recent on-road image data…
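As a concrete illustration of the setup the abstract describes (training YOLO-family detectors on BDD100K for VRU classes), the following is a minimal sketch of converting BDD100K detection labels into Darknet/YOLO-format annotation files restricted to VRU categories. The JSON schema, category names, and file names here are assumptions based on the public BDD100K label format, not details taken from the paper.

# Hedged sketch: convert BDD100K detection labels to Darknet/YOLO txt files
# for the VRU classes only. Category names vary across BDD100K label releases
# (older releases use "person"/"bike"/"motor"); adjust as needed.
import json
from pathlib import Path

VRU_CLASSES = {"pedestrian": 0, "rider": 1, "bicycle": 2, "motorcycle": 3}  # assumed names
IMG_W, IMG_H = 1280, 720  # BDD100K frame size

def convert(labels_json: str, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for frame in json.loads(Path(labels_json).read_text()):
        lines = []
        for obj in frame.get("labels") or []:
            cls = VRU_CLASSES.get(obj.get("category"))
            box = obj.get("box2d")
            if cls is None or box is None:
                continue  # skip non-VRU objects and objects without a 2D box
            # Convert corner coordinates to normalized center/size (YOLO format).
            w = (box["x2"] - box["x1"]) / IMG_W
            h = (box["y2"] - box["y1"]) / IMG_H
            xc = (box["x1"] + box["x2"]) / (2 * IMG_W)
            yc = (box["y1"] + box["y2"]) / (2 * IMG_H)
            lines.append(f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
        (out / (Path(frame["name"]).stem + ".txt")).write_text("\n".join(lines))

convert("bdd100k_labels_images_train.json", "labels/train")  # hypothetical file names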

Cited by 12 publications (5 citation statements)
References 29 publications
“…This technique enables the model to learn how to recognize objects at a smaller scale than usual and considerably reduces the demand for large batch sizes during training. This approach not only enhances the model's ability to detect small targets and improve its generalization ability but also enriches the feature information in the images (Ortiz et al., 2020). In this study, chicken part images were collected and a dataset was established.…”
Section: Results
confidence: 99%
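The technique this excerpt appears to describe is mosaic-style augmentation, introduced with YOLOv4: several images are tiled into one training sample, so objects appear at a smaller scale than usual and a single batch element carries the content of multiple images. A minimal sketch, assuming a fixed 2x2 layout and pixel-coordinate boxes (real implementations randomize the crossover point and clip boxes):

# Hedged sketch of mosaic-style augmentation: four images are tiled into one
# canvas; each object appears at roughly half its usual scale and one batch
# element effectively carries the content of four images.
import numpy as np
import cv2

def mosaic4(images, boxes_list, size=608):
    """images: four HxWx3 uint8 arrays; boxes_list: four arrays of [x1, y1, x2, y2] in pixels."""
    half = size // 2
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)  # gray padding value
    merged = []
    offsets = [(0, 0), (half, 0), (0, half), (half, half)]  # top-left corner of each tile
    for img, boxes, (ox, oy) in zip(images, boxes_list, offsets):
        h, w = img.shape[:2]
        canvas[oy:oy + half, ox:ox + half] = cv2.resize(img, (half, half))
        if len(boxes):
            b = np.asarray(boxes, dtype=np.float32)
            b[:, [0, 2]] = b[:, [0, 2]] * (half / w) + ox  # rescale and shift x coordinates
            b[:, [1, 3]] = b[:, [1, 3]] * (half / h) + oy  # rescale and shift y coordinates
            merged.append(b)
    return canvas, np.concatenate(merged) if merged else np.zeros((0, 4), dtype=np.float32)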
“…In contrast, YOLOV3 did not perform as well as these models. Unlike the FPN layer used in YOLOV3, YOLOV4 incorporates a […] This approach not only enhances the model's ability to detect small targets and improve its generalization ability but also enriches the feature information in the images (Ortiz et al., 2020). In this study, chicken part images were collected and a dataset was established.…”
Section: Comparison Of The Performance Of Different Detection Methods
confidence: 99%
“…Table 3 gives the AP50 and mAP of the proposed method and six other classical object detection algorithms. For AD-Faster-RCNN [25], SSD [16], YOLOv4-416 [28], and YOLOv4, as there are no detection data for traffic lights, only their pedestrian and vehicle detection results are provided, and the number of their detection categories N = 2. For MS-DAYOLO [26], YOLOv5, YOLOv6, YOLOv7-tiny [27], and the method in this paper, N = 3.…”
Section: Detection Results
confidence: 99%
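For reference, AP50 is the average precision computed at an IoU threshold of 0.5, and mAP averages the per-class AP over the N detection categories being compared, which is why the value of N matters when comparing the methods above. In the standard formulation (not specific to the cited paper):

$$ \mathrm{AP}_{50} = \int_0^1 p_{50}(r)\,dr, \qquad \mathrm{mAP} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{AP}_i $$

where $p_{50}(r)$ is the precision-recall curve of a class at IoU ≥ 0.5 and $\mathrm{AP}_i$ is the average precision of class $i$.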
“…Shuiye Wu proposed a YOLOX-based network model for multi-scale object detection tasks in complex scenes [27]. In order to help reduce accidents in advanced driver-assistance systems, Vicent Ortiz Castelló replaced the Leaky ReLU convolution activation function of the original YOLO implementation with the cutting-edge activation function of the YOLOv4 network to improve detection performance [28]. When identifying obstacles in front of blind people, the obstacle detection algorithms mentioned above often miss detections and produce false detections due to the diverse types of obstacles and complex conditions, such as occlusion, low contrast between target and background, and small target size.…”
Section: Introduction
confidence: 99%
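The “cutting-edge activation function” of the YOLOv4 network that the excerpt refers to is Mish. A minimal PyTorch-style sketch of the swap, using an illustrative conv-BN-activation block (the channel sizes and the 0.1 Leaky ReLU slope follow the Darknet convention, not details from the cited paper):

# Hedged sketch: replacing Leaky ReLU with Mish, the activation used in the
# YOLOv4 backbone, inside a YOLO-style conv-BN-activation block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Mish(x) = x * tanh(softplus(x))
        return x * torch.tanh(F.softplus(x))

def conv_block(c_in: int, c_out: int, use_mish: bool = True) -> nn.Sequential:
    act = Mish() if use_mish else nn.LeakyReLU(0.1)  # 0.1 is the usual Darknet slope
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        act,
    )

x = torch.randn(1, 3, 416, 416)
print(conv_block(3, 32)(x).shape)  # torch.Size([1, 32, 416, 416])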
“…From text analysis [1], [2] or pedestrian detection [3], [4] to healthcare [5], [6], it is unquestionable that Artificial Intelligence (AI) has become more and more useful in almost…”
Section: Introduction
confidence: 99%