2015
DOI: 10.1080/15027570.2015.1069013
Autonomous Weapons Systems, the Frame Problem and Computer Security

Abstract: Unlike human soldiers, autonomous weapons systems (AWS) are unaffected by psychological factors that would cause them to act outside the chain of command. This is a compelling moral justification for their development and eventual deployment in war. To achieve this level of sophistication, the software that runs AWS will have to first solve two problems: the frame problem and the representation problem. Solutions to these problems will inevitably involve complex software. Complex software will create security …


Cited by 21 publications (10 citation statements) · References 20 publications
“…It will become particularly constructive to consider the transparency of human actions and motivations in this respect [1,66]. This can be extended similarly to other AI systems such as autonomous weapons systems and the ability for those systems to understand non-verbal commands given by friendly combatants, or even non-friendly ones, such as in cases where enemy combatants or civilians surrender [67][68][69].…”
Section: Discussion
confidence: 99%
“…How can autonomous weapons like drones in rescue missions be able to detect civilians from targets? Is it true that such an AI-powered system may end up harming innocent civilians at the expense of the targeted persons (Klincewicz, 2015)?…”
Section: Autonomous Systems
confidence: 99%
“…This could lead to paralysis if the rules are meant to function as hard restraints, or if the rules are designed only as guidelines, this could open the door to robotic behavior that should be prohibited. A further and especially pressing issue concerns what is termed the "frame-problem" (Dennett 1984; Klincewicz 2015), namely to grasp the relevant features of a situation so the correct rules are applied.…”
Section: Background Considerations
confidence: 99%