2021
DOI: 10.5281/zenodo.4681234
ultralytics/yolov3: v9.5.0 - YOLOv5 v5.0 release compatibility update for YOLOv3

Cited by 7 publications (3 citation statements)
References 0 publications
“…ResNet101 [3] was used in the other detectors as the backbone network. These detectors were trained for 2x epochs, using the data augmentation strategies of the default YOLO detectors, such as mixup [44], mosaic [45], and photometric distortions. Other settings were left at the defaults of the public implementations [22, 23, 45, 46].…”
Section: Experiments and Results
confidence: 99%
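The mixup augmentation cited in the excerpt above blends two training images with a ratio drawn from a Beta distribution; a minimal sketch (the function name and `alpha` default are illustrative, not taken from the cited implementations):

```python
import numpy as np

def mixup(img_a, img_b, alpha=8.0):
    """Blend two images of the same shape; returns the mixed
    image and the mixing ratio lam (label weights follow lam)."""
    lam = np.random.beta(alpha, alpha)  # ratio sampled from Beta(alpha, alpha)
    mixed = lam * img_a.astype(np.float32) + (1.0 - lam) * img_b.astype(np.float32)
    return mixed.astype(img_a.dtype), lam
```

In detection training, the two images' box labels are typically concatenated and weighted by `lam` and `1 - lam` respectively.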
“…For evaluation in combination with obstacle avoidance in close-to-real-life situations, we equipped the robot Qolo [22] with 2 Lidars (Velodyne VLP-16) and an RGBD sensor (Intel Realsense). Obstacle tracking was performed through real-time people detection, implemented as a sensor-fusion pipeline of Lidar-based detection by DR-SPAAM [28] and RGBD detection through YOLO [29]. The full controller repository can be found here: Qolo-ROS [30].…”
Section: Experimental Evaluation
confidence: 99%
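The sensor-fusion step described above, combining Lidar-based and RGBD-based person detections, can be sketched as a greedy nearest-neighbour association on ground-plane coordinates. This is only an assumed, simplified fusion scheme (function name, `max_dist` threshold, and the averaging of matched pairs are hypothetical, not from the cited Qolo-ROS code):

```python
import math

def fuse_detections(lidar_pts, rgbd_pts, max_dist=0.5):
    """Greedily match (x, y) detections from two sensors within
    max_dist metres; matched pairs are averaged, the rest kept."""
    fused, used = [], set()
    for lx, ly in lidar_pts:
        best, best_d = None, max_dist
        for j, (rx, ry) in enumerate(rgbd_pts):
            d = math.hypot(lx - rx, ly - ry)
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            rx, ry = rgbd_pts[best]
            fused.append(((lx + rx) / 2, (ly + ry) / 2))  # average matched pair
        else:
            fused.append((lx, ly))  # Lidar-only detection
    # keep RGBD-only detections that found no Lidar match
    fused += [p for j, p in enumerate(rgbd_pts) if j not in used]
    return fused
```

A real pipeline would additionally transform both detection sets into a common frame and track identities over time.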
“…A key component for crowd navigation is the localization and tracking of the environment and crowd (Ξ). We set up people tracking through a real-time sensor-fusion pipeline of Lidar-based detection by DR-SPAAM [21] and RGBD detection by YOLO [22], whereas the robot state (ξ) was estimated on the optical-flow principle from a stereo camera (Intel T265) fused with IMU and odometry.…”
Section: Introduction
confidence: 99%