2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00902

Test-Time Fast Adaptation for Dynamic Scene Deblurring via Meta-Auxiliary Learning

Cited by 54 publications (23 citation statements)
References 33 publications
“…Then, given a test sample, TTT first trains the model on this sample with self-supervision and then uses the updated model for the final prediction. Since then, the idea of TTT has been applied to many real-world applications, such as human pose estimation [12], dynamic scene deblurring [3], and long-tailed learning [48]. Compared with TTT, which is designed to improve model performance on out-of-distribution (OOD) test samples, our method is more general: it 1) can be applied to existing OOD generalization methods to further boost their performance, and 2) also improves the predictive performance of any pre-trained model on test samples drawn from the same distribution as the training samples.…”
Section: Related Work
confidence: 99%
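The per-sample TTT recipe described in this statement (adapt on a single test sample with a self-supervised loss, then predict with the updated weights) can be summarized in a minimal, hypothetical PyTorch sketch. The model, the self_supervised_loss callable, the optimizer choice, and the step count below are assumptions for illustration, not the cited papers' actual implementations:

```python
import copy
import torch

def test_time_train_and_predict(model, x_test, self_supervised_loss,
                                lr=1e-4, steps=5):
    """Adapt a copy of the model on a single test sample with a
    self-supervised objective, then predict with the adapted copy."""
    adapted = copy.deepcopy(model)          # keep the source model untouched
    adapted.train()
    optimizer = torch.optim.SGD(adapted.parameters(), lr=lr)

    for _ in range(steps):                  # a few gradient steps per test sample
        optimizer.zero_grad()
        loss = self_supervised_loss(adapted, x_test)   # e.g. rotation prediction or an auxiliary reconstruction task
        loss.backward()
        optimizer.step()

    adapted.eval()
    with torch.no_grad():
        return adapted(x_test)              # final prediction from the updated model
```

Because adaptation runs on a deep copy, the source model is reset for every incoming test sample, which is the per-sample behavior the statement describes.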
“…Threshold in Eqn. (3). We evaluate CLI with different threshold values, selected from {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}, based on ResNet-18.…”
Section: Ablations
confidence: 99%
“…Test-Time Adaptation (TTA) aims to enable quick adaptation of an existing model to new target data without access to the source-domain data the model was trained on. As an important means of handling dynamic domain shift in the real world, TTA is attracting more and more attention across several tasks [4,15,19,25,31]. Among them, Test-Time Training (TTT) [25] updates model parameters in an online manner by applying a self-supervised proxy task to the test data.…”
Section: Related Work
confidence: 99%
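Unlike the per-sample scheme sketched earlier, the online variant described here keeps updating the same parameters as unlabeled test batches stream in. A minimal sketch, assuming a user-supplied proxy_loss (e.g., a rotation-prediction head) and an unlabeled test loader; the names and hyperparameters are illustrative, not those of [25]:

```python
import torch

def online_tta(model, test_loader, proxy_loss, lr=1e-5):
    """Online test-time adaptation: update on each unlabeled test batch
    with a self-supervised proxy loss, then predict with the new weights."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    predictions = []

    for x in test_loader:                    # unlabeled test stream
        model.train()
        optimizer.zero_grad()
        proxy_loss(model, x).backward()      # self-supervised proxy objective on x
        optimizer.step()                     # updates carry over to later batches

        model.eval()
        with torch.no_grad():
            predictions.append(model(x))     # predict with the just-updated weights
    return predictions
```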
“…Cho et al. [8] presented a multi-input multi-output U-Net (MIMO-UNet), which utilizes a single U-Net (i.e., an encoder-decoder with short connections) but multiple input and output images to handle coarse-to-fine image deblurring. Chi et al. [7] utilized an encoder-decoder network to extract multi-scale image features, and then integrated auxiliary and meta-learning to enhance deblurring performance. Chen et al. [4] also applied an encoder-decoder architecture to implement multi-scale and multi-stage image restoration by introducing a new normalization method.…”
Section: Related Work 2.1 Image Deblurring
confidence: 99%
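To make the multi-input multi-output idea concrete, below is a heavily simplified, hypothetical PyTorch sketch: downscaled copies of the blurred image are injected at the matching encoder levels, and a restored image is emitted at every decoder scale. The skip connections and feature-fusion modules of the actual MIMO-UNet [8] are omitted, and all layer names and channel sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMIMONet(nn.Module):
    """Toy multi-input multi-output encoder-decoder for deblurring:
    one shared network takes the blurred image at three scales and
    emits a restored image at each scale (coarse-to-fine)."""

    def __init__(self, ch=32):
        super().__init__()
        self.in_full = nn.Conv2d(3, ch, 3, padding=1)      # full-scale input
        self.in_half = nn.Conv2d(3, ch, 3, padding=1)      # half-scale input
        self.in_quarter = nn.Conv2d(3, ch, 3, padding=1)   # quarter-scale input
        self.down1 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.down2 = nn.Conv2d(2 * ch, ch, 3, stride=2, padding=1)
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.up2 = nn.ConvTranspose2d(ch, ch, 2, stride=2)
        self.up1 = nn.ConvTranspose2d(ch, ch, 2, stride=2)
        self.out_quarter = nn.Conv2d(ch, 3, 3, padding=1)
        self.out_half = nn.Conv2d(ch, 3, 3, padding=1)
        self.out_full = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, blur):
        # multi-scale blurred inputs (H and W assumed divisible by 4)
        blur_half = F.interpolate(blur, scale_factor=0.5, mode="bilinear", align_corners=False)
        blur_quarter = F.interpolate(blur, scale_factor=0.25, mode="bilinear", align_corners=False)

        # encoder: inject the downscaled inputs at the matching levels
        f1 = F.relu(self.in_full(blur))                                    # H x W
        f2 = F.relu(self.down1(f1))                                        # H/2 x W/2
        f2 = torch.cat([f2, F.relu(self.in_half(blur_half))], dim=1)
        f3 = F.relu(self.down2(f2))                                        # H/4 x W/4
        f3 = torch.cat([f3, F.relu(self.in_quarter(blur_quarter))], dim=1)
        f3 = F.relu(self.fuse(f3))

        # decoder: residual restored image at every scale
        out_quarter = self.out_quarter(f3) + blur_quarter
        u2 = F.relu(self.up2(f3))
        out_half = self.out_half(u2) + blur_half
        u1 = F.relu(self.up1(u2))
        out_full = self.out_full(u1) + blur
        return out_full, out_half, out_quarter
```

In a multi-scale training setup, each output would typically be supervised against the sharp target downsampled to the same resolution, e.g. full, half, quarter = TinyMIMONet()(torch.randn(1, 3, 64, 64)).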