2021 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata52589.2021.9671751
Citywide reconstruction of cross-sectional traffic flow from moving camera videos

Cited by 14 publications (8 citation statements)
References 25 publications
“…C. Vehicle detection and tracking 1) Vehicle detection using YOLOv7: For the detection of vehicles inside the CARLA driving simulator, we use the vehicle orientation dataset [4] to train a YOLOv7 model [8] to detect both vehicle class and orientation. The main reason to choose YOLOv7 to train the vehicle detection neural network is that YOLOv7 is significantly lightweight (75% reduction in parameters with 1.5% higher AP for the same base model) compared to its real predecessor YOLOv4 [13], [14] while achieving improved benchmark results on the COCO dataset [15].…”
Section: B. CARLA ReID Dataset (mentioning)
Confidence: 99%
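The citing work above trains YOLOv7 on a joint label space so that each detection carries both a vehicle class and an orientation. The sketch below shows one way such a combined label space could be decoded back into (vehicle type, orientation) pairs; the specific vehicle type list and class ordering are assumptions for illustration, not the vehicle orientation dataset's actual label file (only the front/back/side orientations and the 15-vs-12-class bus difference come from the quoted text).

```python
# Illustrative sketch only: the vehicle type names and class ordering below are
# assumed, not taken from the vehicle orientation dataset's actual label file.
from itertools import product

# Hypothetical joint label space: 5 vehicle types x 3 orientations = 15 classes;
# dropping the bus rows would leave the 12 classes of the synthetic dataset.
VEHICLE_TYPES = ["car", "truck", "bus", "motorcycle", "bicycle"]
ORIENTATIONS = ["front", "back", "side"]
CLASS_NAMES = [f"{v} {o}" for v, o in product(VEHICLE_TYPES, ORIENTATIONS)]

def decode_class(class_id: int) -> tuple[str, str]:
    """Map a detector class index back to (vehicle type, orientation)."""
    vehicle = VEHICLE_TYPES[class_id // len(ORIENTATIONS)]
    orientation = ORIENTATIONS[class_id % len(ORIENTATIONS)]
    return vehicle, orientation

if __name__ == "__main__":
    # e.g. class 7 -> ("bus", "back") under this assumed ordering
    for cid in (0, 7, 14):
        print(cid, CLASS_NAMES[cid], decode_class(cid))
```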
“…We use the base model of YOLOv7 with pre-trained weights on the COCO dataset with an input size of 640×640. We first train with the real-world images from the vehicle orientation dataset [4] for 100 epochs with a learning rate of 0.001 on four Tesla A100 GPUs [16] and then use the synthetic vehicle orientation dataset [6] to fine-tune the next ten epochs with a reduced learning rate of 0.0001 to prevent large changes in the parameters. It should be noted that the vehicle orientation dataset contains 15 classes of vehicles, while the synthetic vehicle orientation dataset has 12 classes of vehicles due to the absence of bus class; thus, bus front, bus back, and bus side classes are not present.…”
Section: B. CARLA ReID Dataset (mentioning)
Confidence: 99%
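The recipe quoted above (COCO-pretrained YOLOv7 base model at 640×640, 100 epochs at learning rate 0.001 on real data, then 10 fine-tuning epochs at 0.0001 on synthetic data) amounts to a two-stage schedule with a reduced learning rate in the second stage. Below is a minimal PyTorch sketch of that schedule only: the toy model, random tensors, and SGD settings are placeholders, not the citing paper's YOLOv7 training code, which uses the YOLOv7 repository's own scripts.

```python
# Minimal sketch of a two-stage schedule: train at lr=1e-3 on real data, then
# fine-tune at lr=1e-4 on synthetic data to avoid large parameter shifts.
# A toy model and random tensors stand in for YOLOv7 and the datasets.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def make_loader(n: int, num_classes: int = 15) -> DataLoader:
    # Random stand-in data; real training would stream annotated image frames.
    x = torch.randn(n, 3, 640, 640)
    y = torch.randint(0, num_classes, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=4)

model = nn.Sequential(nn.Conv2d(3, 8, 3, stride=4), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 15))  # toy stand-in for YOLOv7
criterion = nn.CrossEntropyLoss()

def run_stage(loader: DataLoader, epochs: int, lr: float) -> None:
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

# Stage 1: real-world data at lr = 0.001 (100 epochs in the cited setup).
run_stage(make_loader(8), epochs=1, lr=1e-3)  # epochs shortened to keep the demo fast
# Stage 2: synthetic data at lr = 0.0001 to prevent large changes in the parameters.
run_stage(make_loader(8), epochs=1, lr=1e-4)
```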