The rapid growth and adoption of social platforms has enhanced communication between organizations and their audiences, enabled the digital transformation of existing ideas and businesses, and spawned new ones that exist on or depend entirely on this digital environment. Nevertheless, alongside these promising aspects, social media is a vulnerable digital environment in which a wide range of cyber incidents are planned and executed against a diverse set of targets. Among these, social media manipulation through threats such as disinformation and misinformation produces a broad span of effects that cross digital borders into the human realm by influencing and altering human beliefs, behaviour, and attitudes towards specific ideas, institutions, or people. To tackle these issues, academic researchers, social platforms, dedicated organizations, and institutions have built advanced and intelligent solutions for detecting and preventing them. However, these efforts embed the defender's perspective and are focused locally, at the target level, without being designed to fit the broader agenda of producing and/or strengthening social media security awareness. Accordingly, this research proposes a deep learning-based disinformation detection solution for facilitating and/or enhancing social media security awareness with respect to the offender's perspective. To achieve this objective, a Data Science approach is taken based on the Design Science Research methodology, and the results obtained are discussed with a view to further developments in the field regarding intelligent, transparent, and responsible solutions that counter social media manipulation through the realistic participation and contribution of stakeholders from different disciplines.
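As a hedged illustration only (the abstract does not specify an architecture), a deep learning-based disinformation detector of the kind proposed here can be sketched as a binary text classifier. The model, hyperparameters, and dummy inputs below are assumptions for illustration, not the authors' actual design.

# Minimal sketch of a deep learning disinformation detector.
# Assumption: posts are tokenized to integer ids and labeled 1 (disinformation) or 0 (credible).
import torch
import torch.nn as nn

class DisinfoDetector(nn.Module):
    """Toy bidirectional LSTM classifier returning one logit per post."""
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)           # (batch, seq, embed_dim)
        _, (hidden, _) = self.lstm(embedded)           # hidden: (2, batch, hidden_dim)
        pooled = torch.cat([hidden[0], hidden[1]], dim=1)
        return self.classifier(pooled).squeeze(-1)     # raw logits

# Illustrative usage with dummy token ids (a real pipeline would tokenize social media posts).
model = DisinfoDetector(vocab_size=10_000)
batch = torch.randint(1, 10_000, (4, 32))              # 4 posts, 32 tokens each
logits = model(batch)
probs = torch.sigmoid(logits)                           # probability of disinformation
loss = nn.BCEWithLogitsLoss()(logits, torch.tensor([1., 0., 1., 0.]))
print(probs, loss.item())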
In essence, social media is coming to represent the superposition of all digital representations of human concepts, ideas, beliefs, attitudes, and experiences. In this realm, information is not only shared but also misinterpreted or deliberately misrepresented, whether unintentionally or intentionally, driven by (some degree of) awareness, uncertainty, or offensive purposes. This can produce implications and consequences such as societal and political polarization, and can influence or alter human behaviour and beliefs. To tackle the issues arising from social media manipulation mechanisms such as disinformation and misinformation, a diverse palette of efforts has been proposed, including governmental and social media platform strategies, policies, and methods, as well as academic and independent studies and solutions. From a technical standpoint, however, such solutions rely mainly on gaming or AI-based techniques and technologies, often consider only the defender's perspective, and address the social dimension of this phenomenon in a limited way, making them single-angled. To address these issues, this research combines the defenders' perspective with that of the offenders by (i) building a hybrid deep learning disinformation generation and detection model and (ii) capturing and proposing a set of design recommendations that could be considered when establishing patterns, requirements, and features for future gaming and AI-based solutions for combating social media manipulation mechanisms. This is done using the Design Science Research methodology in a Data Science approach, aiming to enhance security awareness and resilience against social media manipulation.
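The hybrid generation and detection model is not detailed in the abstract, so the sketch below only illustrates the general idea of pairing a text generator with a detector, under the assumption that synthetic posts are used to augment the detector's training data. The checkpoints ("gpt2", "bert-base-uncased"), labels, and toy examples are illustrative assumptions, not the authors' actual setup.

# Hedged sketch of a generation-plus-detection loop for disinformation.
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
import torch

# 1) Generation side: sample disinformation-style continuations from a seed prompt.
generator = pipeline("text-generation", model="gpt2")
synthetic = [out["generated_text"]
             for out in generator("Breaking: scientists admit",
                                  max_new_tokens=30, num_return_sequences=2,
                                  do_sample=True)]

# 2) Detection side: a sequence classifier scores posts as disinformation (1) or not (0).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
detector = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                              num_labels=2)

posts = synthetic + ["The city council approved the annual budget on Tuesday."]
labels = torch.tensor([1, 1, 0])              # synthetic posts treated as positives

inputs = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")
outputs = detector(**inputs, labels=labels)   # cross-entropy loss + logits
outputs.loss.backward()                       # one illustrative training step
print(outputs.logits.softmax(dim=-1))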