2023
DOI: 10.1038/s41598-023-31341-0
ChatGPT’s inconsistent moral advice influences users’ judgment

Abstract: ChatGPT is not only fun to chat with, but it also searches information, answers questions, and gives advice. With consistent moral advice, it can improve the moral judgment and decisions of users. Unfortunately, ChatGPT’s advice is not consistent. Nonetheless, it does influence users’ moral judgment, we find in an experiment, even if they know they are advised by a chatting bot, and they underestimate how much they are influenced. Thus, ChatGPT corrupts rather than improves its users’ moral judgment. While the…

Cited by 76 publications (36 citation statements)
References 18 publications
“…While we hypothesize that ChatGPT's diminished performance in the July 2023 exams might stem from exposure to new test questions, it is also possible that its inherent inconsistencies contributed to the initially observed low scores in July 2023. 40,41 Third, we did not evaluate the appropriateness or logical consistency of ChatGPT's reasoning for each question. Fourth, we relied on official answers provided on the Ministry of Examination's website in Taiwan as the benchmark for correctness.…”
Section: Discussion
confidence: 99%
“…Additionally, ChatGPT incorporates a degree of randomness, resulting in variations in its responses even when faced with the same question asked repeatedly. 41 However, large language models like ChatGPT predict each subsequent word based on the preceding context. This allows for a multitude of ways to express the same idea with different phrasings.…”
Section: Discussion
confidence: 99%
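The randomness described in the excerpt above comes from how text is decoded: the model assigns scores to candidate next tokens and samples from the resulting distribution rather than always taking the single most likely continuation. The following is a minimal, illustrative sketch of temperature-based sampling, not OpenAI's actual decoding code; the vocabulary, scores, and temperature value are hypothetical.

import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from raw scores using a temperature-scaled softmax."""
    scaled = [score / temperature for score in logits]
    # Softmax over the scaled scores (subtract the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical scores for three candidate continuations of the same prompt.
vocab = ["yes", "no", "it depends"]
logits = [2.0, 1.8, 0.5]

# Because sampling is stochastic, repeating the same prompt can yield
# different answers, which is the inconsistency the excerpt refers to.
for _ in range(5):
    print(vocab[sample_next_token(logits, temperature=1.0)])

With a higher temperature the distribution flattens and answers vary more; as the temperature approaches zero the sketch approaches greedy decoding and the same prompt yields the same answer each time.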
“…Ultimately, scientists may one day outsource the coding itself, but will still need to be trained in how to prompt AI tools appropriately (Zamfirescu-Pereira et al., 2023), how to assess the validity of their outputs (Passi and Vorvoreanu, n.d.; Zombies in the Loop? 2023), and to consider the societal implications and applications of these outputs (Tomašev et al., 2020; Krügel et al., 2023).…”
Section: E14-6
confidence: 99%