We present a fully convolutional network (FCN) based approach for color image restoration. FCNs have recently shown remarkable performance for high-level vision problems such as semantic segmentation. In this paper, we investigate whether FCN models can also perform well on low-level problems such as image restoration. We propose a fully convolutional model that learns a direct end-to-end mapping from corrupted input images to the desired clean output images. Our method takes inspiration from domain transformation techniques but presents a data-driven, task-specific approach in which the filters for novel basis projection, task-dependent coefficient alteration, and image reconstruction are all represented as convolutional layers. Experimental results show that our FCN model outperforms traditional sparse-coding-based methods and demonstrates competitive performance compared to state-of-the-art image denoising methods. We further show that our model can solve the difficult problem of blind image inpainting and produces reconstructed images of impressive visual quality.
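The abstract describes a three-stage pipeline (basis projection, coefficient alteration, reconstruction) but not its exact configuration. The following is a minimal PyTorch sketch of such an end-to-end restoration FCN; the layer counts, kernel sizes, and channel widths are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RestorationFCN(nn.Module):
    """Sketch of a fully convolutional corrupted-to-clean image mapping."""
    def __init__(self, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, features, kernel_size=9, padding=4),   # basis projection
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=1),       # coefficient alteration
            nn.ReLU(inplace=True),
            nn.Conv2d(features, 3, kernel_size=5, padding=2),   # image reconstruction
        )

    def forward(self, corrupted):
        return self.net(corrupted)

# Usage: train with a pixel-wise loss between restored and clean images.
model = RestorationFCN()
corrupted = torch.rand(1, 3, 128, 128)
restored = model(corrupted)  # same spatial size as the input
```

Because every layer is convolutional, the network accepts inputs of arbitrary spatial size, which is what makes a direct whole-image mapping possible.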
Analyzing the aesthetic quality of images is a highly challenging task because of its subjectivity. With the exponential growth of digital images on social media, assessing image aesthetics is in great demand for several multimedia applications, such as increasing social popularity. Previous approaches to this problem have used hand-designed features or features extracted automatically by deep convolutional neural network architectures. In this paper, we predict the aesthetics of images by using inferential information derived from the visual content of an image. To the best of our knowledge, this is the first attempt to address this problem using predicted tags. Experimental results show that our proposed method outperforms traditional machine learning methods and demonstrates competitive performance compared to state-of-the-art methods for image aesthetics prediction.
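The abstract does not specify how the predicted tags are consumed. One plausible reading, sketched below, is that an upstream tagger outputs a vector of tag probabilities which a small classifier maps to an aesthetic score; `num_tags` and the classifier head are hypothetical choices, not the authors' design.

```python
import torch
import torch.nn as nn

class TagBasedAesthetics(nn.Module):
    """Sketch: map predicted-tag probabilities to an aesthetic-quality score."""
    def __init__(self, num_tags=1000):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(num_tags, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 1),  # scalar aesthetic score (e.g., high vs. low)
        )

    def forward(self, tag_probs):
        # tag_probs: (batch, num_tags), e.g., sigmoid outputs of a tagger
        return self.head(tag_probs)
```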
We present an image inpainting technique using frequency-domain information. Prior works on image inpainting predict the missing pixels by training neural networks using only spatial-domain information. However, these methods still struggle to reconstruct high-frequency details for real complex scenes, leading to color discrepancies, boundary artifacts, distorted patterns, and blurry textures. To alleviate these problems, we investigate whether better performance can be obtained by training the networks using frequency-domain information (the discrete Fourier transform) along with spatial-domain information. To this end, we propose a frequency-based deconvolution module that enables the network to learn the global context while selectively reconstructing the high-frequency components. We evaluate our proposed method on the publicly available CelebFaces Attributes (CelebA), Paris StreetView, and Describable Textures datasets and show that our method outperforms current state-of-the-art image inpainting techniques both qualitatively and quantitatively.
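The exact design of the frequency-based module is not given in the abstract. Below is a minimal sketch of one way a frequency branch could work, assuming the network convolves the real and imaginary planes of the 2-D DFT of a feature map and fuses the result back residually with the spatial features; the 1x1 convolution and residual fusion are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FrequencyModule(nn.Module):
    """Sketch: process a feature map's 2-D DFT, then return to the spatial domain."""
    def __init__(self, channels):
        super().__init__()
        # Real and imaginary planes are stacked as extra channels.
        self.conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, x):
        spec = torch.fft.fft2(x)                      # complex spectrum, (B, C, H, W)
        f = torch.cat([spec.real, spec.imag], dim=1)  # (B, 2C, H, W)
        f = self.conv(f)                              # per-frequency reweighting
        real, imag = f.chunk(2, dim=1)
        out = torch.fft.ifft2(torch.complex(real, imag)).real
        return x + out                                # residual fusion with the spatial branch
```

Because each DFT coefficient aggregates the whole spatial extent of the feature map, even a 1x1 convolution in the frequency domain has a global receptive field, which matches the abstract's claim of learning global context while adjusting high-frequency components.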