Recently, deepfake videos generated by deep learning algorithms have attracted widespread attention. Deepfake technology can perform face manipulation with high realism. A large number of deepfake videos are already circulating on the Internet, most of which target celebrities or politicians. These videos are often used to damage reputations and sway public opinion, seriously threatening social stability. Although the deepfake algorithm itself is neither good nor evil, the technology has been widely used for malicious purposes. To prevent it from threatening society, a range of research efforts has been launched, including the development of detection methods and the construction of large-scale benchmarks. This review surveys the current state of deepfake video detection, covering the generation process, representative detection methods, and existing benchmarks. It shows that current detection methods are still insufficient for real-world deployment, and that further research should pay more attention to generalization and robustness.
Coronaviruses are a well-known family of viruses that can infect humans and animals. Recently, the novel coronavirus (COVID-19) has spread worldwide, and countries are working hard to control the disease. However, many countries face shortages of medical equipment and personnel due to the limitations of their medical systems, which facilitates the wide spread of the disease. As a powerful tool, artificial intelligence (AI) has been successfully applied to complex problems ranging from big data analysis to computer vision. During epidemic control, many algorithms have been proposed to solve problems across medical domains, reducing the workload of the medical system. Owing to its strong learning ability, AI has played an important role in drug development, epidemic forecasting, and clinical diagnosis. This research provides a comprehensive overview of AI research during the outbreak and helps to develop new and more powerful methods for dealing with the current pandemic.
Deep neural networks are vulnerable to adversarial samples, posing potential threats to applications that deploy deep learning models in practice. A typical example is the fingerprint liveness detection module in fingerprint authentication systems. Inspired by the great progress of deep learning, deep network-based fingerprint liveness detection algorithms have emerged and now dominate the field. In this paper, we therefore investigate the feasibility of deceiving state-of-the-art deep network-based fingerprint liveness detection schemes by exploiting this vulnerability. Extensive evaluations are made with three existing adversarial methods: FGSM, MI-FGSM, and DeepFool. We also propose an adversarial attack method that enhances the robustness of adversarial fingerprint images to transformations such as rotation and flipping. We demonstrate that these otherwise strong schemes are likely to classify fake fingerprints as live ones after tiny perturbations are added, even without access to internal details of the underlying model. The experimental results reveal a serious security loophole in these schemes, and urgent attention should be paid to adversarial robustness, not only in fingerprint liveness detection but in all deep learning applications.
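To make the idea of adding tiny perturbations concrete, the following is a minimal FGSM-style sketch in PyTorch. It is illustrative only and not the paper's proposed attack; the model, the epsilon value, and the binary live/fake label encoding are assumptions made for the example.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Perturb `image` in the direction of the sign of the loss gradient,
    # the core step of FGSM. `model`, `epsilon`, and the live/fake label
    # encoding are illustrative assumptions, not the paper's setup.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()   # one signed-gradient step
    return adv.clamp(0.0, 1.0).detach()         # keep pixels in a valid range

Momentum-based variants such as MI-FGSM iterate this step while accumulating gradients, and transformation-robust attacks average gradients over rotated or flipped copies of the input; the sketch above shows only the single-step case.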