This paper reviews the NTIRE 2020 challenge on real image denoising, with a focus on the newly introduced dataset, the proposed methods, and their results. The challenge is a new version of the previous NTIRE 2019 challenge on real image denoising, which was based on the SIDD benchmark. This challenge is based on newly collected validation and testing datasets, and hence is named SIDD+. The challenge has two tracks for quantitatively evaluating image denoising performance in (1) the Bayer-pattern rawRGB and (2) the standard RGB (sRGB) color spaces. Each track had ∼250 registered participants. A total of 22 teams, proposing 24 methods, competed in the final phase of the challenge. The methods proposed by the participating teams represent the current state-of-the-art performance in denoising real noisy images. The newly collected SIDD+ datasets are publicly available at: https://bit.ly/siddplus_data. A. Abdelhamed (kamel@eecs.yorku.ca, York University), M. Afifi, R. Timofte, and M.S. Brown are the NTIRE 2020 challenge organizers, while the other authors participated in the challenge. Appendix A contains the authors' teams and affiliations.
Recently, deep neural network (DNN) based methods for low-dose CT have been investigated and achieve excellent performance in both image quality and computational speed. However, almost all methods that use DNNs for low-dose CT require clean ground truth data acquired at the full radiation dose to train the DNNs. In this work, we attempt to train DNNs for low-dose CT reconstruction with reduced tube current by investigating unsupervised training of DNNs that denoise sensor measurements or sinograms without full-dose ground truth images. In other words, our proposed methods allow DNNs to be trained with only noisy low-dose CT measurements. First, the Poisson Unbiased Risk Estimator (PURE) is investigated to train a DNN for denoising CT measurements, and a method is proposed for reconstructing the CT image using filtered back-projection (FBP) and the DNN trained with PURE. Then, the CT forward-model-based Weighted Stein's Unbiased Risk Estimator (WSURE) is proposed to train a DNN for denoising CT sinograms and to subsequently reconstruct the CT image using FBP. Our proposed methods achieve excellent performance in both computation speed and reconstructed image quality; their results are closer to those of DNNs trained with full-dose ground truth data than are those of other state-of-the-art denoising methods such as BM3D, Deep Image Prior, and Deep Decoder.
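To make concrete how a denoiser can be trained without clean targets, the following is a minimal sketch of the unbiased risk estimators the abstract refers to; these are the standard Gaussian and Poisson forms from the literature, and the paper's weighted variant (WSURE) may differ in its exact weighting. For Gaussian measurements y = x + n with n ~ N(0, σ²I), Stein's unbiased risk estimator of a denoiser f's mean squared error is

\mathrm{SURE}(f;y) = \frac{1}{N}\,\lVert f(y)-y\rVert_2^2 \;-\; \sigma^2 \;+\; \frac{2\sigma^2}{N}\,\nabla_y \!\cdot\! f(y),

and, for Poisson-distributed measurements y with mean x, one common form of the Poisson unbiased risk estimator is

\mathrm{PURE}(f;y) = \frac{1}{N}\Big(\lVert f(y)\rVert_2^2 \;-\; 2\sum_i y_i\, f_i(y-e_i) \;+\; \sum_i \big(y_i^2 - y_i\big)\Big),

where e_i is the i-th standard basis vector. Both quantities have the same expectation as the true risk \frac{1}{N}\,\mathbb{E}\lVert f(y)-x\rVert_2^2, so minimizing them over the network weights requires only the noisy measurements y, never the clean signal x.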
Compressive sensing is a method to recover the original image from undersampled measurements. To overcome the ill-posedness of this inverse problem, image priors are used, such as sparsity in the wavelet domain, minimum total variation, or self-similarity. Recently, deep learning based compressive image recovery methods have been proposed and have yielded state-of-the-art performance. They use deep learning based data-driven approaches instead of hand-crafted image priors to solve the ill-posed inverse problem with undersampled data. Ironically, training deep neural networks for these methods requires "clean" ground truth images, but obtaining the best quality images from undersampled data requires well-trained deep neural networks. To resolve this dilemma, we propose novel methods based on two well-grounded theories: denoiser-approximate message passing and Stein's unbiased risk estimator. Our proposed methods were able to train deep learning based image denoisers from undersampled measurements, without ground truth images and without image priors, and to recover images of state-of-the-art quality from undersampled data. We evaluated our methods on various compressive sensing recovery problems with Gaussian random, coded diffraction pattern, and compressive sensing MRI measurement matrices. Our methods yielded state-of-the-art performance in all cases without ground truth images and without image priors, and performance comparable to that of methods trained with ground truth data.
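As an illustration of how a denoiser can be trained directly from noisy data with Stein's unbiased risk estimator, below is a minimal sketch of a Monte-Carlo SURE training loss in the spirit of Ramani et al.'s MC-SURE. The function name mc_sure_loss and the probe step size eps are illustrative assumptions, not the authors' implementation, and the full method additionally couples such a denoiser with denoiser-approximate message passing to handle undersampled measurements.

```python
import torch

def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
    """Monte-Carlo SURE loss for i.i.d. Gaussian noise of std sigma.

    A sketch of a ground-truth-free objective: its expectation equals
    the true per-pixel MSE E||f(y) - x||^2 / N, so no clean image x
    is ever needed during training.
    """
    n = y.numel()
    fy = denoiser(y)
    # Data-fidelity term: ||f(y) - y||^2 / N - sigma^2
    fidelity = torch.mean((fy - y) ** 2) - sigma**2
    # Divergence term, estimated with one random probe b ~ N(0, I):
    # div f(y) ~= b^T (f(y + eps*b) - f(y)) / eps
    b = torch.randn_like(y)
    div = torch.sum(b * (denoiser(y + eps * b) - fy)) / (eps * n)
    return fidelity + 2.0 * sigma**2 * div
```

In a training loop, this loss simply replaces the usual supervised MSE against a clean target: one backpropagates mc_sure_loss(model, y_noisy, sigma) through the network weights using only noisy inputs.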
This paper reviews the Challenge on Super-Resolution of Compressed Image and Video at AIM 2022. The challenge includes two tracks. Track 1 aims at the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular DIV2K dataset as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results to Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state-of-the-art of super-resolution on compressed image and video. The proposed LDV 3.0 dataset is available at https://github.com/RenYang-home/LDV_dataset. The homepage of this challenge is at https://github.com/RenYang-home/AIM22_CompressSR.