2018
DOI: 10.3390/s18103286

Domain Adaptation and Adaptive Information Fusion for Object Detection on Foggy Days

Abstract: Foggy days pose many difficulties for outdoor camera surveillance systems. On foggy days, the optical attenuation and scattering effects of the medium significantly distort and degrade the scene radiance, making it noisy and indistinguishable. Aiming to solve this problem, in this paper we propose a novel object detection method that is able to exploit information in both the color and depth domains. To prevent the error propagation problem, we clean the depth information before the training process…
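The degradation described above is commonly summarized by the standard atmospheric scattering (Koschmieder) model, in which transmission decays exponentially with scene depth; this also suggests why depth information is useful on foggy days. The Python sketch below renders that textbook model only, it is not code from the paper, and the scattering coefficient and air-light values are placeholder assumptions.

import numpy as np

def apply_fog(clear_rgb, depth, beta=1.2, airlight=0.9):
    """Synthesize fog with the atmospheric scattering model.

    I(x) = J(x) * t(x) + A * (1 - t(x)),  with  t(x) = exp(-beta * d(x))
    clear_rgb : HxWx3 float array in [0, 1] (scene radiance J)
    depth     : HxW float array of scene depth (arbitrary units)
    beta      : assumed scattering coefficient (placeholder)
    airlight  : assumed global atmospheric light A (placeholder)
    """
    t = np.exp(-beta * depth)[..., None]          # per-pixel transmission
    return clear_rgb * t + airlight * (1.0 - t)   # attenuated radiance + air-light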

Cited by 7 publications (6 citation statements) · References 38 publications
“…Domain adaptation can be understood as adapting a classifier trained on a source domain to recognize instances from a new target domain [7]. Chen [2] proposed a domain-adaptive Faster R-CNN to improve cross-domain robustness. Two domain adaptation components, one at the image level and one at the instance level, are integrated into the Faster R-CNN model to alleviate the domain discrepancy between the two domains.…”
Section: Object Detection Model
confidence: 99%
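As a rough illustration of how image-level and instance-level adaptation components can be attached to a detector, the PyTorch sketch below trains two small domain classifiers through a gradient reversal layer. The module names, feature sizes, and loss weighting are illustrative assumptions, not the implementation from [2].

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class DomainAdaptationHeads(nn.Module):
    """Hypothetical image-level and instance-level domain classifiers."""
    def __init__(self, feat_channels=512, roi_dim=1024, lam=0.1):
        super().__init__()
        self.lam = lam
        # image level: per-location domain prediction on the backbone feature map
        self.img_head = nn.Sequential(
            nn.Conv2d(feat_channels, 256, 1), nn.ReLU(), nn.Conv2d(256, 1, 1))
        # instance level: domain prediction on pooled RoI feature vectors
        self.ins_head = nn.Sequential(
            nn.Linear(roi_dim, 256), nn.ReLU(), nn.Linear(256, 1))
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, feat_map, roi_feats, is_target_domain):
        label = float(is_target_domain)
        img_logits = self.img_head(GradReverse.apply(feat_map, self.lam))
        ins_logits = self.ins_head(GradReverse.apply(roi_feats, self.lam))
        img_loss = self.bce(img_logits, torch.full_like(img_logits, label))
        ins_loss = self.bce(ins_logits, torch.full_like(ins_logits, label))
        # this adversarial loss would be added to the usual detection loss
        return img_loss + ins_loss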
“…Specifically, DMask-RCNN places a domain-adaptive component branch after the base feature-extraction convolution layers of Mask R-CNN. The experimental results in [2,16] show that these two domain adaptation methods outperform the basic Faster R-CNN and Mask R-CNN when detecting objects in hazy images.…”
Section: Object Detection Model
confidence: 99%
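In the same spirit, a domain branch can be hooked onto the feature maps produced by a Mask R-CNN backbone. The sketch below uses torchvision's Mask R-CNN purely as a stand-in; the branch architecture, channel sizes, and two-class detector are assumptions for illustration rather than the DMask-RCNN design from [16].

import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

class DomainBranch(nn.Module):
    """Small domain classifier applied to backbone/FPN feature maps (illustrative)."""
    def __init__(self, channels=256):  # torchvision's FPN emits 256-channel maps
        super().__init__()
        self.cls = nn.Sequential(
            nn.Conv2d(channels, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1))

    def forward(self, fpn_feats):
        # fpn_feats is an ordered dict of pyramid levels; score each level
        return torch.cat([self.cls(f) for f in fpn_feats.values()], dim=0)

detector = maskrcnn_resnet50_fpn(num_classes=2)  # e.g. background + one object class
domain_branch = DomainBranch()

images = torch.rand(2, 3, 512, 512)      # placeholder batch of (foggy) images
features = detector.backbone(images)     # base feature extraction layers
domain_logits = domain_branch(features)  # an adversarial domain loss would use these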
“…The visibility of the scene can be improved by removing haze/fog and by correcting the color shifts caused by the air-light. Appearance modeling consists of identifying visual object features that better represent a region of interest and constructing effective mathematical models to detect objects [11,12]. Degraded, low-visibility images make object detection a more challenging computer vision problem, with numerous real-world applications including human-computer interaction, autonomous vehicles, robotics, surveillance, and security systems [13-15].…”
Section: Introduction
confidence: 99%
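A common concrete instance of such visibility restoration is dark-channel-prior dehazing. The sketch below is a deliberately simplified version (no soft matting or guided filtering), and the patch size, omega, and air-light estimate are illustrative assumptions.

import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(hazy, patch=15, omega=0.95, t_min=0.1):
    """Simplified dark-channel-prior dehazing of an HxWx3 image in [0, 1]."""
    # dark channel: per-patch minimum over space and color
    dark = minimum_filter(hazy.min(axis=2), size=patch)
    # crude air-light estimate: color at the brightest dark-channel pixel
    y, x = np.unravel_index(dark.argmax(), dark.shape)
    A = hazy[y, x, :]
    # transmission from the dark channel of the air-light-normalized image
    norm_dark = minimum_filter((hazy / A).min(axis=2), size=patch)
    t = np.clip(1.0 - omega * norm_dark, t_min, 1.0)[..., None]
    # invert the scattering model: J = (I - A) / t + A
    return np.clip((hazy - A) / t + A, 0.0, 1.0)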
“…Indoor-outdoor camera surveillance systems [1,2] are widely used in urban areas, railway stations, airports, smart homes, and supermarkets. These systems play an important role in security management and traffic management [3].…”
Section: Introduction
confidence: 99%
“…Recently, the literature [2,4] has seen a growing interest in developing transfer learning (TL) or domain adaptation (DA) algorithms that minimize the distribution gap between domains, so that the structure or information available in the source domain can be effectively transferred to understand the structure of the target domain. In previous work [5-12], two learning strategies for domain adaptation have been considered independently: (1) instance re-weighting [9-12], which reduces the distribution gap between domains by re-weighting the source-domain instances and then training the model with the re-weighted source-domain data; and (2) feature matching [5,6,8,13,14], which finds a common feature space across both domains by minimizing the distribution gap.…”
Section: Introduction
confidence: 99%
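To make the feature-matching strategy concrete, the sketch below computes a squared maximum mean discrepancy (MMD) between source-domain and target-domain feature batches with an RBF kernel; minimizing such a statistic is one standard way to shrink the distribution gap. The kernel and bandwidth are assumptions, not details from the cited works.

import numpy as np

def rbf_mmd2(source, target, gamma=1.0):
    """Squared MMD between an (n, d) source batch and an (m, d) target batch."""
    def rbf(a, b):
        # pairwise squared distances, then a Gaussian kernel
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-gamma * d2)
    return rbf(source, source).mean() + rbf(target, target).mean() - 2 * rbf(source, target).mean()

# A large value signals a gap between, e.g., clear-weather and foggy features.
src = np.random.randn(64, 128)        # placeholder source-domain features
tgt = np.random.randn(64, 128) + 0.5  # placeholder shifted target-domain features
print(rbf_mmd2(src, tgt))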