2023
DOI: 10.1109/tpami.2022.3200245
TransFuser: Imitation With Transformer-Based Sensor Fusion for Autonomous Driving

Cited by 137 publications (35 citation statements)
References 100 publications
“…We measured the running time of our model on a single RTX 3090 GPU by averaging over all time steps of the evaluation route, as shown in Table 5. Some of the data in Table 5 are quoted from (Chitta et al., 2022). Our model takes 35.3 ms per frame, an increase of 11.8 ms and 7.7 ms over LF (23.5 ms) and TF (27.6 ms), respectively.…”
Section: Methods
confidence: 99%
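The per-frame figures quoted above come from wall-clock averaging over an evaluation route. Below is a minimal sketch of that measurement pattern in PyTorch; the model, input frames, and warm-up count are illustrative assumptions rather than the cited paper's exact harness, and the explicit `torch.cuda.synchronize()` calls are needed because GPU kernels launch asynchronously.

```python
import time
import torch

def mean_latency_ms(model, frames, warmup=10):
    """Average per-frame inference time in milliseconds over a route."""
    model.eval()
    with torch.no_grad():
        # Warm-up passes so one-time CUDA setup costs are excluded.
        for x in frames[:warmup]:
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for x in frames:
            model(x)
        torch.cuda.synchronize()  # wait for all queued GPU work to finish
    return (time.perf_counter() - start) * 1000.0 / len(frames)
```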
“…Most recently, a driving simulator built for behavioral research, DReyeVR [18], was presented, building upon the well-established CARLA driving simulator with added data collection, VR functionality, and hardware integration. DReyeVR's compatibility with CARLA democratizes VR user studies for driving and thus opens doors for research that is compatible with many existing state-of-the-art autonomous driving methods, especially those built off Learning By Cheating [19], [20], [21].…”
Section: B. Modeling Driving Behavior
confidence: 99%
“…Moreover, our control method generalizes well to viewpoint changes due to effective data augmentation. Furthermore, [14] and its extension [16] deploy an additional LiDAR sensor along with the desired goal position to predict the vehicle's future waypoints at inference time. In our work, we use only RGB images to determine the position and orientation of the goal (target vehicle) at inference and do not require an extra LiDAR sensor in the vehicle setup.…”
Section: Related Work
confidence: 99%
“…Therefore, we use the CARLA driving simulator (version 0.9.11) [5] for our experiments. CARLA has been widely adopted for online evaluation of control algorithms such as [13], [14], [16] due to the wide range of sensors that can be attached to the ego-vehicle for both data collection and inference, as well as its maps with different terrains and routes. It also furnishes precise ground-truth labels for various tasks such as semantic segmentation, optical flow, and dense depth.…”
Section: Experiments, A. Experimental Setup
confidence: 99%
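As context for the sensor flexibility this statement refers to, here is a minimal sketch of attaching an RGB camera and a LiDAR to an ego-vehicle with the CARLA 0.9.x Python API; the host/port, vehicle blueprint, and mount transforms are illustrative assumptions, not the cited papers' exact configuration.

```python
import carla

# Connect to a running CARLA server (default host/port assumed).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

# Spawn an ego-vehicle at the map's first spawn point.
vehicle_bp = blueprints.filter('vehicle.tesla.model3')[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach an RGB camera and a LiDAR, both rigidly mounted to the vehicle.
camera_bp = blueprints.find('sensor.camera.rgb')
camera = world.spawn_actor(
    camera_bp,
    carla.Transform(carla.Location(x=1.5, z=2.4)),  # illustrative mount pose
    attach_to=vehicle)

lidar_bp = blueprints.find('sensor.lidar.ray_cast')
lidar = world.spawn_actor(
    lidar_bp,
    carla.Transform(carla.Location(z=2.5)),
    attach_to=vehicle)

# Each sensor streams its measurements through a callback.
camera.listen(lambda image: image.save_to_disk('out/%06d.png' % image.frame))
lidar.listen(lambda scan: scan.save_to_disk('out/%06d.ply' % scan.frame))
```

The same pattern extends to the other sensors CARLA exposes (semantic segmentation cameras, depth cameras, GNSS, IMU), which is what makes it convenient for both data collection and closed-loop evaluation.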