2018 21st International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc.2018.8569387
Dark Model Adaptation: Semantic Image Segmentation from Daytime to Nighttime

Abstract: This work addresses the problem of semantic image segmentation of nighttime scenes. Although considerable progress has been made in semantic image segmentation, it is mainly related to daytime scenarios. This paper proposes a novel method to progressively adapt the semantic models trained on daytime scenes, along with large-scale annotations therein, to nighttime scenes via the bridge of twilight time, the time between dawn and sunrise, or between sunset and dusk. The goal of the method is to alleviate the cost …
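The progressive adaptation described in the abstract can be pictured as a self-training loop: a model trained on labeled daytime data labels slightly darker (twilight) images, is fine-tuned on those pseudo-labels, and the loop repeats toward nighttime. The sketch below is an illustrative simplification, not the authors' code; all function names and the toy threshold "segmenter" are assumptions for demonstration.

```python
# Illustrative sketch (not the authors' code) of progressive adaptation
# from daytime to nighttime via intermediate twilight stages.

def adapt_through_twilight(model, stages, fine_tune):
    """stages: unlabeled image batches ordered daytime -> twilight -> nighttime."""
    for images in stages:
        pseudo_labels = [model(img) for img in images]   # current model self-labels
        model = fine_tune(model, images, pseudo_labels)  # adapt, then go darker
    return model

# Toy stand-in: a "segmenter" that thresholds pixel brightness, and a
# "fine-tuner" that recenters the threshold on the current stage's mean.
def make_model(threshold):
    return lambda pixel: int(pixel > threshold)

def recenter(model, images, pseudo_labels):
    return make_model(sum(images) / len(images))

day_to_night = [[0.9, 0.7], [0.5, 0.4], [0.2, 0.1]]  # mean brightness drops per stage
adapted = adapt_through_twilight(make_model(0.8), day_to_night, recenter)
```

The point of the toy example is only the control flow: each stage's model produces the supervision for the next, slightly darker stage, so no nighttime annotations are ever required.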

Cited by 194 publications (191 citation statements)
References 30 publications (49 reference statements)
“…Mean IoU increases remarkably for the nighttime images in the BDD testing set and the Nighttime Driving test set, while keeping the same level of accuracy as the baseline for daytime images. Compared to the method proposed by D. Dai et al. [32], our method gains nearly 4% on the same test sets. [Figure: examples from the BDD Dataset (top two rows) and the Nighttime Driving Dataset (bottom two rows).] In general, our method (converting 2,000 images to synthetic nighttime images during training) performs better than the ordinary ERF-PSPNet (trained on the original BDD10K training set).…”
(mentioning, confidence: 71%)
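The "converting daytime images to synthetic nighttime images" step quoted above could, in its crudest form, be a gamma-based darkening of pixel intensities; published pipelines typically use learned image-to-image translation (e.g. GANs) instead. The snippet below is a hypothetical illustration of the simplest variant, not the cited method.

```python
# Hypothetical, crude day->night conversion: gamma darkening of
# normalized pixel intensities. Real day-to-night synthesis usually
# relies on GAN-based image-to-image translation rather than this.

def darken(pixels, gamma=3.0):
    """pixels: intensities in [0, 1]; gamma > 1 darkens midtones most."""
    return [p ** gamma for p in pixels]

night_like = darken([1.0, 0.5, 0.1])  # highlights survive, midtones drop
```

Note that pure intensity scaling preserves scene content exactly, which is why such synthetic nighttime images can reuse the daytime segmentation labels unchanged.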
“…The results of the second method, converting part of the daytime images in the training set to nighttime images at the training stage, are shown in the last four rows, together with a baseline method, DarkModelAdaptation, proposed by D. Dai et al. [32] and validated on their dataset. Mean IoU increases remarkably for the nighttime images in the BDD testing set and the Nighttime Driving test set, while keeping the same level of accuracy as the baseline for daytime images.…”
(mentioning, confidence: 99%)
“…As a type of domain adaptation technique, domain unification is the holy grail of visual perception, theoretically allowing models trained on samples with limited heterogeneity to perform adequately on scenes well outside the distribution of the training data. Domain unification can be applied within the vast distribution of natural images [1], [2], [3], between natural and synthetic images (computer-generated, whether through traditional 3D rendering or more modern GAN-based techniques) [4], [5], and even between different sensor modalities [6]. Additionally, domain unification can be implemented at different stages of a computer vision pipeline, through direct approaches such as domain confusion [7], [8], [9], fine-tuning models on target domains [1], or mixture-of-experts approaches [10].…”
Section: Introduction (mentioning, confidence: 99%)
“…Most approaches attempt to solve this shortcoming by using 3D-rendered simulations that programmatically provide ground truth [11], [12], or by using unsupervised techniques that adapt models based on auxiliary or proxy tasks [7], [8], [9], [13], [1]. (Authors are from the Oxford Robotics Institute, University of Oxford, UK. {horia, tombruls, pnewman}@robots.ox.ac.uk)…”
Section: Introduction (mentioning, confidence: 99%)
“…However, the VPGNet is limited to detecting and recognizing markings on the road that have regular patterns (i.e., road markings with very similar shapes, colors, and textures), so less semantics can be extracted compared to object detection and semantic segmentation. The DarkModel takes twilight images and nighttime images with semantic segmentation labels and uses the twilight images as an intermediate stage to train the network toward a better semantic segmentation map of nighttime images, but the method is essentially a process of fine-tuning the network.…”
Section: Introduction (mentioning, confidence: 99%)