We introduce a paradigm for nonlocal sparsity reinforced deep convolutional neural network denoising. It combines local multiscale denoising by a convolutional neural network (CNN) based denoiser with nonlocal denoising by a nonlocal filter (NLF) that exploits the mutual similarities between groups of patches. CNN models are applied with noise levels that progressively decrease at every iteration of our framework, while their output is regularized by the nonlocal prior implicit in the NLF. Unlike complicated neural networks that embed the nonlocality prior within the layers of the network, our framework is modular: it uses standard pre-trained CNNs together with standard nonlocal filters. An instance of the proposed framework, called NN3D, is evaluated over large grayscale image datasets and shows state-of-the-art performance.
Index Terms: image denoising, convolutional neural network, nonlocal filters, BM3D.
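The abstract describes an alternation between a local CNN denoiser, run at progressively smaller noise levels, and a nonlocal filter that regularizes its output. The sketch below illustrates such a loop under stated assumptions: the iteration schedule, the cnn_denoiser and nonlocal_filter callables, and the identity placeholders in the usage example are illustrative, not the published NN3D algorithm.

```python
import numpy as np

def nn3d_iterative_denoise(noisy, cnn_denoiser, nonlocal_filter,
                           sigmas=(25, 15, 8)):
    """Minimal sketch of the modular framework: alternate a pre-trained CNN
    denoiser (run with progressively smaller noise levels) with a nonlocal
    filter such as BM3D that regularizes the CNN output using patch
    self-similarity. Schedule and combination rule are assumptions."""
    estimate = noisy.copy()
    for sigma in sigmas:                          # progressively decreasing noise level
        local = cnn_denoiser(estimate, sigma)     # local multiscale CNN denoising
        estimate = nonlocal_filter(local, sigma)  # nonlocal regularization (e.g. BM3D)
    return estimate

# Usage with identity placeholders standing in for real denoisers:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64), dtype=np.float32)
    noisy = clean + rng.normal(0, 25, clean.shape).astype(np.float32)
    out = nn3d_iterative_denoise(noisy,
                                 cnn_denoiser=lambda x, s: x,
                                 nonlocal_filter=lambda x, s: x)
```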
Single image super-resolution (SISR) is an ill-posed problem aiming at estimating a plausible high-resolution (HR) image from a single low-resolution image. Current state-of-the-art SISR methods are patch-based: they use either external data or internal self-similarity to learn a prior for the HR image. External data-based methods utilize a large number of patches from the training data, while self-similarity-based approaches leverage one or more similar patches from the input image. In this paper, we propose a self-similarity-based approach that uses large groups of similar patches extracted from the input image to solve the SISR problem. We introduce a novel prior leading to the collaborative filtering of patch groups in a 1D similarity domain and couple it with an iterative back-projection framework. The performance of the proposed algorithm is evaluated on a number of SISR benchmark data sets. Without using any external data, the proposed approach outperforms the current methods not based on convolutional neural networks on the tested data sets for various scaling factors. On certain data sets, the gain over the recent method A+ exceeds 1 dB. For a high scaling factor (x4), the proposed method performs similarly to very recent state-of-the-art deep convolutional network-based approaches.
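The approach couples a self-similarity prior with an iterative back-projection (IBP) framework. The sketch below shows a generic IBP loop with a pluggable prior; the resampling via scipy.ndimage.zoom, the fixed iteration count, and the prior callable (standing in for the paper's collaborative filtering of patch groups in the 1D similarity domain) are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import zoom

def iterative_back_projection(lr, scale=2, n_iter=10, prior=None):
    """Generic iterative back-projection loop. `prior`, if given, is a
    callable regularizing the HR estimate (here a stand-in for the
    similarity-domain collaborative filter described in the abstract)."""
    hr = zoom(lr, scale, order=3)                  # initial HR guess (cubic upsampling)
    for _ in range(n_iter):
        lr_sim = zoom(hr, 1.0 / scale, order=3)    # simulate the LR image from current HR
        residual = lr - lr_sim                     # consistency error in the LR domain
        hr = hr + zoom(residual, scale, order=3)   # back-project the residual to HR
        if prior is not None:
            hr = prior(hr)                         # apply the self-similarity prior
    return hr

# Example: upscale a small array by x2 without any prior.
lr = np.random.default_rng(0).random((32, 32)).astype(np.float32)
hr = iterative_back_projection(lr, scale=2)
```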
While the data-parallel aspects of OpenCL have been of primary interest, owing to the focus on massively data-parallel GPUs, OpenCL also provides powerful capabilities for describing task parallelism. In this article we study the task-parallel concepts available in OpenCL and evaluate how well the different vendor-specific implementations can exploit task parallelism when the parallelism is described in various ways using command queues. We show that the vendor implementations are not yet capable of automatically extracting kernel-level task parallelism from in-order queues. To assess the potential performance benefits of in-order queue parallelization, we implemented such capabilities in an open source implementation of OpenCL. The evaluation was conducted by means of a case study of an advanced noise reduction algorithm described as a multi-kernel OpenCL application.
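One way to express the queue-level distinction the abstract discusses is to compare an in-order command queue, where independent kernels are implicitly serialized unless the runtime parallelizes them, with an out-of-order queue, where only explicit event dependencies impose ordering. The pyopencl sketch below illustrates this; the kernels, buffer sizes, and the use of pyopencl itself are illustrative assumptions rather than the case-study noise reduction application, and out-of-order queues are not supported on every device.

```python
import numpy as np
import pyopencl as cl

# Two independent toy kernels; names and bodies are hypothetical.
SRC = """
__kernel void scale(__global float *a) { int i = get_global_id(0); a[i] *= 2.0f; }
__kernel void shift(__global float *b) { int i = get_global_id(0); b[i] += 1.0f; }
"""

ctx = cl.create_some_context()
prog = cl.Program(ctx, SRC).build()

a = np.zeros(1024, dtype=np.float32)
b = np.zeros(1024, dtype=np.float32)
mf = cl.mem_flags
buf_a = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=a)
buf_b = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=b)

# In-order queue: the two independent kernels are implicitly ordered;
# running them concurrently requires the runtime to detect independence.
q_in_order = cl.CommandQueue(ctx)
prog.scale(q_in_order, a.shape, None, buf_a)
prog.shift(q_in_order, b.shape, None, buf_b)
q_in_order.finish()

# Out-of-order queue: task parallelism is explicit; only real data
# dependencies (expressed through events) would order the kernels.
props = cl.command_queue_properties.OUT_OF_ORDER_EXEC_MODE_ENABLE
q_ooo = cl.CommandQueue(ctx, properties=props)
ev1 = prog.scale(q_ooo, a.shape, None, buf_a)
ev2 = prog.shift(q_ooo, b.shape, None, buf_b)
cl.wait_for_events([ev1, ev2])
```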