2021
DOI: 10.1109/tpami.2020.2996538
Bridging the Gap Between Computational Photography and Visual Recognition

Cited by 34 publications (15 citation statements)
References 101 publications
“…Preprocessing for high-level vision tasks. As suggested in [49,42,8], SID has become a frequently-used preprocessing step, so we further examine whether our method brings benefits to downstream high-level vision tasks. Here, we randomly select 100 real-world hazy images from RTTS [20] with object categories and bounding boxes.…”
Section: Other Applications
confidence: 99%
“…As pointed out by plenty of recent works (Wang et al. 2016; Liu et al. 2019, 2020; Scheirer et al. 2020; Yang et al. 2020; Hahner et al. 2019), the performance of high-level computer vision tasks, such as object detection and recognition, will deteriorate in the presence of various sensory and environmental degradations. In particular, Sakaridis et al. (2018) studied the effect of image dehazing on semantic segmentation with a synthesized Foggy Cityscapes dataset of 20,550 images.…”
Section: Task-driven Evaluation Sets
confidence: 99%
“…Following the success of the UG² challenge on this topic held at IEEE/CVF CVPR 2018 [41,42], a new challenge with an emphasis on video was organized at CVPR 2019. The UG²+ 2019 Challenge provided an integrated forum for researchers to evaluate recent progress in handling various adverse visual conditions in real-world scenes in robust, effective and task-oriented ways.…”
Section: * Denotes Equal Contribution
confidence: 99%
“…The main goal of this track is to correct visual aberrations present in video in order to improve the classification results obtained with out-of-the-box classification algorithms. For this, we adapted the evaluation method and metrics provided in [42] to take into account the temporal factor of the data present in the UG² dataset. Below we introduce the adapted training and testing datasets, as well as the evaluation metrics and baseline classification results for this task.…”
Section: Object Classification Improvement In Video
confidence: 99%