Motion deblurring is a challenging task in vision and graphics. Recent research deblurs images using multiple sub-networks with multi-scale or multi-patch inputs. However, scaling or splitting the input images inevitably discards spatial detail, and the resulting models are usually complex and computationally expensive. To address these problems, we propose a novel variant-depth scheme: multiple sub-networks of different depths, all operating on scale-invariant inputs, are combined into a variant-depth network (VDN). In our design, the sub-networks at different levels deblur the image progressively without transforming the inputs, which effectively reduces the computational complexity of the model. Extensive experiments show that VDN outperforms state-of-the-art motion deblurring methods while maintaining a lower computational cost.
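A minimal sketch of the variant-depth idea in PyTorch is given below. The module names, the chosen depths, and the way sub-network outputs are chained are illustrative assumptions, not the authors' exact architecture; the point is that every stage sees the full-resolution input without rescaling or splitting.

```python
# Illustrative sketch only: depths, channel widths, and the residual chaining
# are assumptions, not the published VDN design.
import torch
import torch.nn as nn

class SubNet(nn.Module):
    """A plain residual CNN whose depth (number of conv blocks) can vary."""
    def __init__(self, channels=32, depth=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(depth)
        ])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        feat = self.body(self.head(x))
        return x + self.tail(feat)  # predict a residual correction to the input

class VariantDepthNet(nn.Module):
    """Chains sub-networks of increasing depth on the same full-resolution input."""
    def __init__(self, depths=(2, 4, 8)):
        super().__init__()
        self.stages = nn.ModuleList(SubNet(depth=d) for d in depths)

    def forward(self, blurred):
        out = blurred
        for stage in self.stages:   # progressive deblurring, no rescaling/splitting
            out = stage(out)
        return out

if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)        # a full-resolution blurred image
    print(VariantDepthNet()(x).shape)      # torch.Size([1, 3, 256, 256])
```

Because no stage downsamples or crops the input, spatial detail is preserved end to end, and the overall cost grows only with the (shallow) per-stage depths rather than with a pyramid of resized copies.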
Face recognition (FR) systems based on convolutional neural networks have shown excellent performance in inferring identity from human faces. However, malicious users may exploit such powerful systems to identify face images that victims have disclosed on their social network accounts, and thereby obtain private information. To address this emerging issue, it is essential to synthesize face protection images that preserve visual quality while defeating recognition. Existing face protection methods suffer from three critical problems: poor visual quality, limited protective effect, and an unfavorable trade-off between the two. To address these challenges, we propose a novel face protection approach in this article. Specifically, we design a generative adversarial network (GAN) framework with an autoencoder as the generator (AEGAN) to synthesize protection images. Notably, we introduce an interpolation upsampling module in the decoder so that the synthesized protection images evade recognition by powerful convolution-based FR systems. Furthermore, we introduce an attention module with a perceptual loss into AEGAN to enhance the visual quality of the synthesized images. Extensive experiments show that AEGAN not only maintains comfortable visual quality in the synthesized images but also prevents recognition by commercial FR systems, including those of Baidu and iFLYTEK.
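The sketch below illustrates one way the generator's "interpolation upsampling" decoder could look in PyTorch: upsampling is done with bilinear interpolation followed by ordinary convolutions rather than transposed convolutions. Layer sizes and the overall layout are assumptions for illustration, not the exact AEGAN design, and the adversarial discriminator, attention module, and perceptual loss are omitted.

```python
# Illustrative autoencoder-style generator with interpolation upsampling.
# All hyperparameters here are assumptions, not the published AEGAN settings.
import torch
import torch.nn as nn

class AEGenerator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder upsamples by interpolation + conv rather than transposed conv,
        # mirroring the abstract's "interpolation upsampling module".
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch * 2, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, face):
        # Synthesize a protected image of the same resolution as the input face.
        return self.decoder(self.encoder(face))

if __name__ == "__main__":
    face = torch.rand(1, 3, 128, 128) * 2 - 1    # face image scaled to [-1, 1]
    protected = AEGenerator()(face)
    print(protected.shape)                        # torch.Size([1, 3, 128, 128])
```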
In this paper, we present the multi-stage attentive network (MSAN), an efficient convolutional neural network (CNN) architecture with good generalization performance for motion deblurring. We build a multi-stage encoder–decoder network with self-attention and train it with a binary cross-entropy loss. MSAN has two core designs. First, we introduce a new attention-based end-to-end method on top of multi-stage networks that applies group convolution to the self-attention module, effectively reducing the computational cost and improving the model's adaptability to different blurred images. Second, we propose optimizing the model with binary cross-entropy loss instead of pixel loss, which reduces the over-smoothing effect of pixel loss while maintaining a good deblurring result. We conduct extensive experiments on several deblurring datasets to evaluate our solution. MSAN achieves superior performance and generalizes well, comparing favorably with state-of-the-art methods.
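The sketch below shows, in PyTorch, how the two core designs could be realized: a self-attention block whose query/key/value projections use group convolutions to cut compute, and a binary cross-entropy restoration loss applied to images mapped into [0, 1]. Channel counts, the number of groups, and the loss formulation are illustrative assumptions, not the published MSAN configuration.

```python
# Illustrative sketch of grouped-convolution self-attention and a BCE restoration
# loss; hyperparameters are assumptions, not the published MSAN settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedSelfAttention(nn.Module):
    def __init__(self, channels=64, groups=4):
        super().__init__()
        # 1x1 group convolutions for the Q/K/V projections reduce parameter
        # count and FLOPs compared with dense projections.
        self.q = nn.Conv2d(channels, channels, 1, groups=groups)
        self.k = nn.Conv2d(channels, channels, 1, groups=groups)
        self.v = nn.Conv2d(channels, channels, 1, groups=groups)
        self.scale = channels ** -0.5

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)        # (b, hw, c)
        k = self.k(x).flatten(2)                        # (b, c, hw)
        v = self.v(x).flatten(2).transpose(1, 2)        # (b, hw, c)
        attn = torch.softmax(q @ k * self.scale, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                  # residual connection

def bce_restoration_loss(pred, sharp):
    # Treat each pixel intensity in [0, 1] as a soft target instead of using an
    # L1/L2 pixel loss, following the abstract's binary cross-entropy objective.
    return F.binary_cross_entropy(pred.clamp(0, 1), sharp.clamp(0, 1))

if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    print(GroupedSelfAttention()(feat).shape)           # torch.Size([1, 64, 32, 32])
```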