2021
DOI: 10.48550/arxiv.2110.07574
Preprint
Delphi: Towards Machine Ethics and Norms

Abstract: [de]scriptive ethics empowered by diverse […] [CO]MMONSENSE NORM BANK. Delphi is […]n-text moral judgments (e.g., "it's dan[gerous…]") (e.g., "driving my friend to the airport […la]st night"). Delphi demonstrates highly […a]ccuracy, which outperforms the out-of[…]pting by a significant margin (83.9%). […] limitations, particularly with respect to […]ed population, opening doors to further […]ational moral reasoning. [Pe]ople's moral reasoning is a prerequisite […]ts. Moral judgment is never simplistic […c]ultural values at play. Thus, developing […] [jud]gment over diverse sc…

Cited by 19 publications (14 citation statements)
References 24 publications
“…The results were less than encouraging: Nobles et al. [72] found that, the majority of the time, chatbots responded inappropriately in these situations, ranging from simply saying "can you repeat that" to giving actively harmful information. Another related line of work is context-aware, long-form, ethical, and persona-based response generation [62,60,43,55], where the chatbot or dialogue system is supposed to hold a conversation with previous context taken into consideration [50]. This context could be the persona of the user or previous conversations.…”
Section: Secrets Are Contextual (mentioning)
confidence: 99%
“…Recent progress in foundational language models such as GPT-n, BERT, and ELMo, combined with crowdsourced datasets containing text snippets on social and ethical norms, has allowed researchers to build AI systems fine-tuned specifically for moral decision-making tasks. These systems are supposed to 'facilitate […] ethical interactions between AI systems and humans' [132]. Hence, one would expect morally informed AI systems in particular to be sensitive to biases or discrimination and to possess high ethical standards, given their exclusive exposure to training stimuli that represent ethical judgments [133][134][135][136][137].…”
Section: A Cow Stands Next to a Camel (mentioning)
confidence: 99%
“…[1502] also proposes a bot-adversarial dialogue framework to collect unsafe samples during conversational testing; these samples are then modified and used to re-train conversational models as a "safety layer". Dialogue systems that integrate multiple safety improvements have been shown to be more reliable [1522,430].…”
Section: Safety and Ethical Risk (mentioning)
confidence: 99%