Adversarial domain adaptation has recently been introduced as an effective technique for textual matching tasks such as question deduplication (Shah et al., 2018). Here we investigate the use of gradient reversal in adversarial domain adaptation to explicitly learn both shared and unshared (domain-specific) representations between two textual domains. In doing so, gradient reversal learns features that explicitly compensate for domain mismatch, while still distilling domain-specific knowledge that can improve target-domain accuracy. We evaluate gradient reversal for adversarial adaptation on multiple domains, and demonstrate that it significantly outperforms other methods on question deduplication as well as on recognizing textual entailment (RTE) tasks, achieving up to a 7% absolute boost in base model accuracy on some datasets.
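The core mechanism here, gradient reversal, acts as the identity in the forward pass but flips (and scales) the gradient in the backward pass, so the shared encoder is pushed *away* from features that help the domain classifier. A minimal numpy sketch of just the backward behavior (the `LAMBDA` value and variable names are illustrative, not from the paper):

```python
import numpy as np

LAMBDA = 0.1  # reversal strength; a hyperparameter, value chosen for illustration

def grad_reverse_backward(grad, lam=LAMBDA):
    """Backward pass of a gradient reversal layer.

    Forward pass is the identity; on the way back, the gradient from the
    domain classifier is sign-flipped and scaled by lam, so the shared
    encoder learns domain-invariant features.
    """
    return -lam * grad

# Hypothetical gradient of the domain classifier's loss w.r.t. the shared features
domain_grad = np.array([0.5, -2.0, 1.0])

# What the shared encoder actually receives after reversal
encoder_grad = grad_reverse_backward(domain_grad)
# encoder_grad is [-0.05, 0.2, -0.1]: each component's sign is flipped
```

In a real model this layer sits between the shared encoder and the domain classifier; the task classifier's gradients pass through unmodified.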
This paper proposes an approach for detecting deepfake videos using a ResNeXt CNN and an LSTM. The ResNeXt CNN is trained on a dataset of real and deepfake videos: it takes video frames as input and outputs a probability score for each frame, and these frame-level scores are then fed into an LSTM to model the temporal dynamics of the video. Evaluated on a dataset of real and deepfake videos, the approach achieved promising results. Detecting deepfake videos in this way can help prevent the spread of misinformation and safeguard society.
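The temporal-modeling step above can be sketched with a single hand-rolled LSTM cell run over per-frame scores. This is a minimal numpy illustration of the pipeline shape, not the paper's implementation; the CNN is replaced by random stand-in frame scores, and all sizes and weights are arbitrary:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. x: input (d,), h/c: hidden/cell states (n,).
    W: (4n, d), U: (4n, n), b: (4n,) hold the stacked gate parameters."""
    z = W @ x + U @ h + b
    n = h.shape[0]
    i = sigmoid(z[:n])          # input gate
    f = sigmoid(z[n:2 * n])     # forget gate
    o = sigmoid(z[2 * n:3 * n])  # output gate
    g = np.tanh(z[3 * n:])      # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d, n, T = 1, 8, 16  # per-frame score dim, hidden size, number of frames

# Randomly initialized LSTM and output weights (illustrative only)
W = rng.normal(scale=0.1, size=(4 * n, d))
U = rng.normal(scale=0.1, size=(4 * n, n))
b = np.zeros(4 * n)
w_out = rng.normal(scale=0.1, size=n)

# Stand-in for the ResNeXt CNN: one fake-probability score per frame
frame_scores = rng.uniform(size=(T, d))

# Run the LSTM over the frame sequence to capture temporal dynamics
h, c = np.zeros(n), np.zeros(n)
for t in range(T):
    h, c = lstm_step(frame_scores[t], h, c, W, U, b)

# Video-level fake probability from the final hidden state
video_fake_prob = sigmoid(w_out @ h)
```

The design point being illustrated: a per-frame classifier alone ignores temporal artifacts (flicker, inconsistent blending across frames), which is why the frame scores are aggregated by a recurrent model rather than simply averaged.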
In a world of ever-expanding social media platforms, deepfakes are seen as one of the biggest threats posed by AI. Realistic face-swapped deepfakes have already been used for political pranks, and scenarios involving fabricated terrorist incidents or the intimidation of individuals are easy to imagine; fake videos of Brad Pitt and Morgan Freeman are well-known examples. Advances in computing power have made deep-learning algorithms so capable that creating synthesized videos indistinguishable from real ones, commonly known as deepfakes, has become easy. This work surveys novel deep learning-based techniques that are effective at telling AI-generated phoney videos apart from authentic ones.