2018 IEEE Intelligent Vehicles Symposium (IV) 2018
DOI: 10.1109/ivs.2018.8500397
Understanding 3D Semantic Structure around the Vehicle with Monocular Cameras

Cited by 14 publications (5 citation statements) · References 8 publications
“…Modern scene segmentation approaches are mostly designed to work with pinhole images on mainstream datasets like Cityscapes [19] and Mapillary Vistas [20]. To enlarge the Field of View (FoV), early surrounding perception systems are based on fisheye images or multiple cameras [4] [21]. Motivated by the prospect of attaining wide-angle semantic perception with a single camera, recent works [22][23] build directly on this modality, relying on synthetic collections that are far less diverse than pinhole databases [19] [20].…”
Section: B. Panoramic Segmentation
confidence: 99%
“…This additional information about the surroundings is critical for many real-world applications such as autonomous driving and various other robotic navigation tasks. In particular, Intelligent Vehicles (IV) require an in-depth understanding of the surrounding world [4].…”
Section: Introduction
confidence: 99%
“…Omnidirectional semantic segmentation. As systems progressed towards 360° perception, early omnidirectional segmentation works were based on fisheye images [12,74] or focused on indoor scenes [54,56], with surround-view systems in street scenes being largely based on a multitude of cameras [11,37,41,47]. Motivated by the prospect of attaining 360° semantic perception based on a singular panoramic camera, recent works build semantic segmentation systems directly on top of this sensor modality [2,68,85].…”
Section: Related Work
confidence: 99%
“…However, they sacrificed partial vertical FoV for the purpose of preserving straight lines when segmenting unwarped images on cylindrical projection surfaces. Analogously, K. Narioka et al. [19] equipped a vehicle with five cameras equiangularly for a wider FoV to perceive surroundings. They designed a set of efficient Deep Neural Networks (DNNs) and trained only with front-facing camera images.…”
Section: Related Work
confidence: 99%