2020
DOI: 10.1007/s00146-019-00927-6

On conflicts between ethical and logical principles in artificial intelligence

Cited by 11 publications (12 citation statements)
References 7 publications
“…D'Acquisto (2020) emphasizes the value of a balanced perspective in the battle between performance and explainability and points out that while a “certain level of transparency” (as opposed to a “certain level of autonomy”) of black box AI helps reduce mistrust in the system, the quest for explainability and transparency should not destabilize other principles and logical constraints (p. 899). Unreasonable and unjust AI explainability requirements can be a disincentive for innovation, especially as innovations are often protected by specific intellectual property laws with limited openness to external stakeholders.…”
Section: Results (mentioning)
Confidence: 99%
“…Extant literature widely assumes that only humans should be regarded as responsible agents (Adadi and Berrada, 2018). The idea that all responsibility can be allocated to humans and not machines assumes that AI systems are not stakeholders and have no interests to defend (D'Acquisto, 2020). This idea suggests that regulations and ethics are only applicable to humans and that it is humans' responsibility to allow or avoid the point of no return of AI autonomy, and therefore humans are the only responsible agents for AI system decisions.…”
Section: Perspectives for Managing Explainability of AI (mentioning)
Confidence: 99%
“…In other words, it has a clear purpose, which implies justification as one requirement for beneficence (Morley et al., 2020). Of course, one has to point out that the notion of goodness, which is at the core of the beneficence principle, is far from being objective both on the individual and superordinate levels (D’Acquisto, 2020). On the individual (customer) level, predictions of future choices based on patterns of customers’ past choices and preferences of similar other customers through recommender systems can be considered as a surrogate for social influence (Cappella, 2017).…”
Section: The Ethics of AI in Marketing (mentioning)
Confidence: 99%