2019
DOI: 10.1007/978-3-030-27005-6_3

Orthogonality-Based Disentanglement of Responsibilities for Ethical Intelligent Systems

Cited by 10 publications (22 citation statements)
References 13 publications
“…Combat robots: Work on combat robots which may not have lethal weapons on board can be found in Elands, et al [25], Aliman & Kester [26], and Aliman, et al [27]. A major risk of such weapons is that warfare gets out of control if these become operational in large numbers while fighting one another rather than human opponents.…”
Section: Sex Robots (mentioning)
confidence: 99%
“…A major risk of such weapons is that warfare gets out of control if these become operational in large numbers while fighting one another rather than human opponents. Aliman, et al [27], Sayler [32].…”
Section: Sex Robots (mentioning)
confidence: 99%
“…Legal parameters, but also legal norms and rules to limit the action space, should be integrated in this process. In order to craft such societal-level augmented utility functions (also called ethical goal functions [16]), society would have to integrate scientific insights and facilitate the experience of counterfactual scenarios, assisted for instance by VR and AR technology. Considering each cluster of instantiated dyadic cognitive templates and each perceiver, an algorithm could assimilate the corresponding relevant human-defined parameters with the human-defined weights and calculate the cardinal, context-sensitive utility of the given scenario, which artificial intelligent systems could then maximize.…”
Section: Goal Specification (mentioning)
confidence: 99%
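As a rough illustration of the aggregation step described in the quoted passage, the following minimal Python sketch combines human-defined parameters and human-defined weights per cluster into one cardinal, context-sensitive utility value. The names (ethical_goal_utility, clusters, weights, perceiver_weight) and the simple weighted-sum form are illustrative assumptions, not the formulation used in the cited work.

```python
# Hypothetical sketch of a societal-level "ethical goal function":
# for each cluster of instantiated dyadic cognitive templates and a given
# perceiver, combine human-defined parameters with human-defined weights
# into a single cardinal utility that an intelligent system could maximize.

from typing import Dict, List

def ethical_goal_utility(
    clusters: List[Dict[str, float]],   # parameter values per template cluster
    weights: List[Dict[str, float]],    # human-defined weights per cluster
    perceiver_weight: float = 1.0,      # scaling for the given perceiver
) -> float:
    """Return one cardinal utility for a scenario as seen by one perceiver."""
    total = 0.0
    for params, cluster_weights in zip(clusters, weights):
        # weighted sum of this cluster's human-defined parameters
        total += sum(cluster_weights.get(name, 0.0) * value
                     for name, value in params.items())
    return perceiver_weight * total

# Example: two clusters, each with two illustrative parameters
clusters = [{"harm": 0.2, "benefit": 0.9}, {"legality": 1.0, "consent": 0.8}]
weights  = [{"harm": -1.0, "benefit": 0.5}, {"legality": 0.7, "consent": 0.6}]
print(ethical_goal_utility(clusters, weights))
```

An intelligent system could then compare candidate scenarios by this scalar value; richer formulations would make the weights context-sensitive and perceiver-dependent rather than fixed constants as in this sketch.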
“…For the governance of artificial intelligent systems, which is a field of interest within both AI safety and AI ethics at an international level [365], it becomes crucial to design an appropriate goal specification framework able to encode the ethical and legal requirements within a given societal context. In this regard, different solutions have been proposed, ranging from rule-based frameworks to methods based on updatable, context-sensitive and perceiver-dependent ethical utility functions formulated at the societal level, called ethical goal functions [16] (see Chapter 5). However, for any case in a given domain in which a society is supposed to contribute to implementing a framework of AI governance, a process of ethical self-assessment that attempts to unambiguously answer the question of what society wants arises as a necessity.…”
Section: Introduction (mentioning)
confidence: 99%