Military cyber operations increasingly integrate, or rely to varying degrees on, AI-based systems in one or more of the phases in which stakeholders are involved. Although the planning and execution of such operations are complex, deliberate processes carried out covertly and at high speed, their implications and consequences can be experienced not only by the targeted entities but also by collateral friendly, non-friendly, or neutral ones. This calls for a broader military-technical and socio-ethical approach when building, conducting, and assessing military cyber operations, so that the aspects and factors considered, and the choices and decisions made, are fair, transparent, and accountable to the stakeholders involved in these processes, to those impacted by their actions, and to society at large. These concerns resonate with issues currently addressed in Responsible AI, an emerging critical research area in the AI field that is scarcely present in ongoing discourse, research, and applications in the military cyber domain. Accordingly, this research aims to define and analyse Responsible AI in the context of military cyber operations, with the intention of bringing important aspects to the attention of both the academic and practitioner communities involved in building and/or conducting such operations. It does so through a transdisciplinary approach and concrete examples drawn from different phases of the operational life cycle. A definition is advanced, the components and entities involved in building responsible intelligent systems are analysed, and further challenges, solutions, and future research lines are discussed.
Hence, this would allow the agents involved to understand what should be done and what they are allowed to do, and to propose and build corresponding strategies, programs, and solutions (e.g., education, modelling and simulation) for properly tackling, building, and applying responsible intelligent systems in the military cyber domain.
The ongoing decade was expected to be a peaceful one. However, contemporary conflicts, and in particular ongoing wars, prove the opposite: they show an increase in contextual complexity when defining goals and execution strategies, and when building means and methods for achieving them by gaining advantage over adversaries through the engagement of well-established targets. At the core of the engagement decision lies the principle of proportionality, which directly relates the expected unintended effects on the civilian side to the anticipated intended effects on the military side. While the clusters of effects involved in the proportionality assessment are clear, the process itself is subjective, governed by different dimensions of uncertainty, and is the responsibility of military commanders. It is thus a complex socio-technical process in which different clusters of influential factors (e.g., military, technical, socio-ethical) shape the decisions made. The objective of this research is therefore to capture and cluster these factors and to model their influence on the proportionality decision-making process. The resulting decision support system provides military targeting awareness to the agents involved in building, executing, and assessing military operations. To accomplish this aim, a Design Science Research methodology is followed for capturing and modelling the influential factors as a socio-technical artefact in the form of a Bayesian Belief Network (BBN) model. The proposed model is evaluated through demonstration on three cases grounded in real military operational incidents and scenarios from the scientific literature in this field. Through this demonstration, it is illustrated and interpreted how the identified factors influence proportionality decisions when assessing target engagement as proportional or disproportional.
In these cases, corresponding measures for strengthening proportionality and reducing disproportionality in military operations are considered.
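To make the BBN idea concrete, the following is a minimal sketch of a discrete Bayesian network for a proportionality assessment, using exact inference by enumeration. The factor names ("civilian presence", "intelligence quality") and all probabilities are hypothetical illustrations, not values from the study.

```python
# Minimal discrete BBN sketch: two parent factors influencing a proportionality
# verdict. Node names and probability tables are invented for illustration.
from itertools import product

# Illustrative priors over the two influential factors
P_civ = {"high": 0.3, "low": 0.7}       # civilian presence near the target
P_intel = {"good": 0.6, "poor": 0.4}    # quality of available intelligence

# Illustrative CPT: P(engagement is proportional | civ, intel)
P_prop = {
    ("high", "good"): 0.4,
    ("high", "poor"): 0.1,
    ("low", "good"): 0.9,
    ("low", "poor"): 0.6,
}

def p_proportional(evidence=None):
    """Exact inference by enumeration, optionally conditioning on evidence."""
    evidence = evidence or {}
    num = den = 0.0
    for civ, intel in product(P_civ, P_intel):
        if evidence.get("civ", civ) != civ or evidence.get("intel", intel) != intel:
            continue
        joint = P_civ[civ] * P_intel[intel]
        num += joint * P_prop[(civ, intel)]
        den += joint
    return num / den

print(round(p_proportional(), 3))                 # prior belief in proportionality
print(round(p_proportional({"civ": "high"}), 3))  # belief after observing high civilian presence
```

Observing evidence (here, high civilian presence) lowers the posterior belief that engagement is proportional, which is the kind of factor-driven shift the BBN model is meant to expose to decision-makers.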
The outlook of military cyber operations is changing due to the prospects of data generation and accessibility, continuous technological advancements and their (public) availability, the increase in technological and human (inter)connections, and the dynamism, needs, diverse backgrounds, perspectives, and skills of the experts involved in their planning, execution, and assessment phases, all with respect to (inter)national aims, demands, and trends. Such operations are conducted daily and have recently been empowered by AI to reach or protect their targets and to deal with the unintended effects that their engagement produces on them and/or on collateral entities. However, these operations are governed and surrounded by different levels of uncertainty, e.g., in predicting intended effects, considering effective alternatives, and understanding new dimensions of possible (strategic) futures. Hence, the legality and ethicality of such operations should be assured; in particular, for Offensive Military Cyber Operations (OMCO), the agents involved in their design and deployment should consider, develop, and propose proper (intelligent) measures and methods. Such mechanisms can be built by embedding intelligent techniques based on hardware, software, and communication data, plus expert knowledge, through novel systems such as digital twins. While digital twins are still in their infancy in military, cyber, and AI academic research and discourse, they have started to show their modelling and simulation potential and effective real-time decision support in various industry applications. This research therefore aims to (i) understand what digital twins mean in the OMCO context while embedding explainable AI and responsible AI perspectives, and (ii) capture the challenges and benefits of their development. Accordingly, a multidisciplinary stance is taken through an extensive review of the domains involved, packaged in a design framework meant to assist the agents involved in their development and deployment.
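The digital-twin concept referred to above can be sketched in a few lines: a twin object mirrors telemetry from a (simulated) asset and supports what-if runs without touching the real system. The class, its fields, and the toy overload rule are hypothetical illustrations, not part of the proposed framework.

```python
# Hedged sketch of the digital-twin idea: state mirroring plus offline what-if
# simulation. All names and the toy dynamics are invented for illustration.
class NetworkNodeTwin:
    def __init__(self):
        self.state = {"load": 0.0, "patched": True}

    def sync(self, telemetry):
        """Mirror the latest observed state of the real asset."""
        self.state.update(telemetry)

    def what_if(self, extra_load):
        """Simulate an intervention on the twin, not on the real asset."""
        projected = self.state["load"] + extra_load
        return "overloaded" if projected > 1.0 else "nominal"

twin = NetworkNodeTwin()
twin.sync({"load": 0.7, "patched": False})
print(twin.what_if(0.2))  # -> nominal
print(twin.what_if(0.5))  # -> overloaded
```

The separation between `sync` (observation) and `what_if` (simulation) is the property that makes twins attractive for decision support: candidate actions can be evaluated in real time before any effect reaches the live environment.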
The rapid growth and use of different social platforms has enhanced communication between entities and their audiences, enabled the digital transformation of existing ideas and businesses, and allowed the creation of new ones that fully exist in, or depend on, this digital environment. Nevertheless, next to these promising aspects, social media is a vulnerable digital environment where a diverse plethora of cyber incidents are planned and executed against a wide range of targets. Among these, social media manipulation through threats such as disinformation and misinformation produces a broad span of effects that cross digital borders into the human realm by influencing and altering human beliefs, behaviour, and attitudes towards specific ideas, institutions, or people. To tackle these issues, efforts by academia, social platforms, dedicated organizations, and institutions exist for building advanced, intelligent solutions for detecting and preventing them. However, these efforts embed the defender's perspective and are focused locally, at target level, without being designed to fit a broader agenda of producing and/or strengthening social media security awareness. Accordingly, this research proposes a deep learning-based disinformation detection solution for facilitating and/or enhancing social media security awareness with respect to the offender's perspective. To achieve this objective, a Data Science approach based on the Design Science Research methodology is taken, and the results obtained are discussed with a view to further developments in the field regarding intelligent, transparent, and responsible solutions countering social manipulation through the realistic participation and contribution of stakeholders from different disciplines.
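As an illustration of the detection task only, the following is a minimal bag-of-words logistic classifier standing in for the (much larger) deep learning model the abstract describes. The tiny labelled corpus and all hyperparameters are invented for demonstration.

```python
# Simplified stand-in for a deep-learning disinformation detector: a
# bag-of-words logistic classifier trained by gradient descent on log loss.
# The labelled examples below are invented placeholders.
import math

TRAIN = [
    ("shocking secret cure they hide from you", 1),    # 1 = disinformation
    ("miracle weapon destroys entire army overnight", 1),
    ("official report confirms casualty figures", 0),  # 0 = legitimate
    ("ministry publishes verified aid statistics", 0),
]

vocab = sorted({w for text, _ in TRAIN for w in text.split()})
idx = {w: i for i, w in enumerate(vocab)}

def features(text):
    """Bag-of-words count vector over the training vocabulary."""
    v = [0.0] * len(vocab)
    for w in text.split():
        if w in idx:
            v[idx[w]] += 1.0
    return v

w = [0.0] * len(vocab)
b, lr = 0.0, 0.5
for _ in range(200):  # plain stochastic gradient descent
    for text, y in TRAIN:
        x = features(text)
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y  # gradient of log loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(text):
    """Probability that a message is disinformation."""
    z = sum(wi * xi for wi, xi in zip(w, features(text))) + b
    return 1.0 / (1.0 + math.exp(-z))

print(predict("shocking secret weapon they hide"))     # flagged as likely disinformation
print(predict("official verified statistics report"))  # flagged as likely legitimate
```

A production detector would replace the bag-of-words features and linear model with learned text representations, but the training loop and decision boundary play the same role.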
Through technological advancements as well as societal trends and developments, social media has become an active part and a catalyst of the conflicts and wars carried out in the physical environment. A direct example is the cyber/information operations currently conducted in conjunction with the ongoing Russian-Ukrainian war. Due to such operations, packaged in social media manipulation mechanisms like disinformation and misinformation and using techniques such as controversies, fake news, and deep fakes, a high degree of confusion and uncertainty surrounds both the events that have taken place and users' behaviour and beliefs. These operations also impact the civilians directly affected on the battlefield and their loved ones. At present, limited scientific and objective effort is dedicated in this direction due to, e.g., data, strategic, and emotional implications. The aim of this research is therefore to capture the main topics discussed and the sentiments expressed by Ukrainian Telegram users on the ongoing Russian-Ukrainian war in 2022, using a Data Science approach and a series of Machine Learning models built on multi-channel data collected in the first six months of the war. Accordingly, this research directly aims to contribute to efforts to understand the real discourses and dynamics of the ongoing conflict through direct resources, to produce and sustain social media security awareness, and to build resilience to social media manipulation campaigns using AI.
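The topic-extraction step of such a pipeline can be sketched with TF-IDF keyword scoring over messages, a simplified stand-in for the study's Machine Learning models. The sample messages are invented placeholders, not data from the collected corpus.

```python
# Illustrative sketch only: TF-IDF keyword scoring as a simplified stand-in for
# topic extraction over channel messages. Sample messages are invented.
import math
from collections import Counter

messages = [
    "air raid sirens reported across the region tonight",
    "humanitarian aid convoy reached the city this morning",
    "air defence intercepted drones over the region",
]

docs = [m.split() for m in messages]
df = Counter(w for d in docs for w in set(d))  # document frequency per word
N = len(docs)

def top_keywords(doc, k=3):
    """Rank a message's words by term frequency times inverse document frequency."""
    tf = Counter(doc)
    scores = {w: (tf[w] / len(doc)) * math.log(N / df[w]) for w in tf}
    return [w for w, _ in sorted(scores.items(), key=lambda t: -t[1])[:k]]

for d in docs:
    print(top_keywords(d))
```

Words that occur in every message (e.g., "the") score zero and drop out, while message-specific terms surface as candidate topic keywords; a real pipeline would apply the same principle at corpus scale before clustering or sentiment analysis.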