2021
DOI: 10.1109/mis.2020.2993266
A Single-Stream Segmentation and Depth Prediction CNN for Autonomous Driving

Cited by 35 publications (16 citation statements)
References 13 publications
“…The results show that the proposed model has better generalization and convergence speed than the original AE network. CNN [79] has been widely used in computer vision [80, 81], speech recognition [82], and other fields [83]. The typical structure of CNN is shown in Figure 13.…”
Section: Output Layer
confidence: 99%
“…Driverless vision understanding has been realised based on 3D information [15]. Aladem and Rawashdeh [16] used a single-stream encoder-decoder structure to realise multi-task prediction. Atapour-Abarghouei and Breckon [17] further studied depth estimation without relying on 3D information.…”
Section: Related Work
confidence: 99%
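The single-stream design cited above can be illustrated with a minimal sketch: one shared encoder produces a feature map that two task heads reuse, one emitting per-pixel segmentation logits and the other a depth map. This is a hypothetical NumPy toy (the `conv_stub` 1x1 projection, channel widths, and class count are assumptions for illustration), not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_stub(x, out_ch):
    """Stand-in for a conv layer: random 1x1 channel projection + ReLU."""
    w = rng.standard_normal((x.shape[0], out_ch)) * 0.1
    h = np.einsum('chw,co->ohw', x, w)  # mix channels at every pixel
    return np.maximum(h, 0.0)

def shared_encoder(img):
    """The single stream: both tasks reuse this feature map."""
    h = conv_stub(img, 16)
    return conv_stub(h, 32)

def seg_head(feat, n_classes=19):
    """Per-pixel class logits, shape (n_classes, H, W)."""
    return conv_stub(feat, n_classes)

def depth_head(feat):
    """Per-pixel depth regression, shape (H, W)."""
    return conv_stub(feat, 1)[0]

img = rng.standard_normal((3, 8, 8))      # toy C, H, W input
feat = shared_encoder(img)                # computed once
seg, depth = seg_head(feat), depth_head(feat)
print(seg.shape, depth.shape)             # (19, 8, 8) (8, 8)
```

The point of the single stream is that the encoder's cost is paid once: both heads branch off the same `feat`, rather than each task running its own backbone.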
“…High-quality depth maps are required in a wide variety of tasks in computer vision and graphics, such as RGB-D scene reconstruction [1, 2], augmented reality [3, 4, 5] and autonomous driving [6, 7, 8]. Compared to standard RGB sensors, depth sensors often produce noisy images, which makes depth-reconstruction tasks especially challenging, since every task also has to account for task-specific depth uncertainties or deficiencies.…”
Section: Introduction
confidence: 99%