2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)
DOI: 10.1109/aivr50618.2020.00031

Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses

Abstract: Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. Thereby, the focus is often set on ethical and safe design forestalling unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instantiations of intentional malice, including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety f…

Cited by 20 publications (37 citation statements)
References 62 publications

Citation statements, ordered by relevance:
“…The aim of the research cluster is to demonstrate the feasibility of malicious AI design motivated by diverse adversarial goals across a variety of domains in order to foster safety-awareness. Beyond that, we consider 1 extra emerging risk pattern, namely automated disconcertion [28] which we introduce in a few paragraphs.…”
Section: RDA for AI Risk Instantiations Ia and Ib - Examples
Mentioning confidence: 99%
“…Interestingly, a perceptible consequence of the mere existence of risk Ia instantiations containing the design of deepfake technologies is the emergence of a risk pattern which has been termed automated disconcertion [28]. Automated disconcertion can imply the intentional or unintentional mislabelling of real samples as fake, for example in the context of misleading conspiracy theories [50] or against the background of uncertain political settings, as was the case in Gabon not long ago [51].…”
Section: RDA for AI Risk Instantiations Ia and Ib - Examples
Mentioning confidence: 99%
“…For example, consider a deepfake hologram portraying a movie celebrity sharing political propaganda which the celebrity themselves do not endorse, targeting fans and spreading lies about the incumbent leader's political opponents. The hologram could be made to harass or provoke viewers (Aliman and Kester, 2020), goading them into acting irrationally. This warrants ethical considerations when designing XR experiences for broadcasting and entertainment; 2) XR technology which can sense and interpret objects in the environment can be used to mask and/or delete recognized objects.…”
Section: Introduction
Mentioning confidence: 99%