2023
DOI: 10.2139/ssrn.4372349

More Human than Human: Measuring ChatGPT Political Bias

Cited by 27 publications
(33 citation statements)
References 23 publications
“…The research types vary from short, explorative papers commenting on chat interview protocols, setting the output of mainly ChatGPT into context (Biswas 2023a; Du et al 2023a; Iskender 2023; Lund and Wang 2023; Neves 2022; Wang et al 2023a), to broad, multidisciplinary perspectives on a variety of topics, implications and industry analyses, primarily conducted by research collectives (e.g., Dwivedi et al 2023). In addition, other studies and publications focus on overarching topics, for example ethical discussions (e.g., Motoki et al 2023; Youvan 2023; Zhuo et al 2023).…”
Section: Descriptive Results
confidence: 99%
“…Google CEO Pichai warns against rushing to deploy AI without oversight, demands "(…) strong regulations to avert harmful effects" (Love 2023), and accentuates that the development of AI should involve not just engineers but also social scientists, philosophers and others, to ensure alignment with human values and morality (Dean 2023). Some of the main concerns are the following: ethical biases inherently rooted in the large datasets these models were trained on, which could carry historical or societal biases such as racial or gender prejudices into the GAI (Motoki et al 2023; Youvan 2023; Zhuo et al 2023); manipulation and disinformation scams; intellectual property theft (Yurkevich 2023); and, lastly, the (for now dystopian) idea of a post-humanist era of general AI dominance. All these aspects require consideration, despite the enormous upside potential for innovation.…”
Section: Discussion
confidence: 99%
“…Finally, standard limitations in the everyday use of LLMs also apply to their usage for classification tasks. Biases inherent in the training of these models (Bisbee et al, 2023;Motoki et al, 2024) may seep into text annotation, especially ones more specific or contentious than the classifications done here. Researchers should be mindful of these potential biases and carefully consider their impact on potential outcomes.…”
Section: Discussion
confidence: 97%
“…These biases may arise from the fact that ChatGPT models have been trained on human-generated text and on reinforcement learning from human feedback to better align with human values [74,75]. In particular, ChatGPT outputs could potentially contain biases toward political leanings [76–79]. These possible biases are unlikely to have affected the results of Study 1, because ChatGPT was tasked to provide numeric scores and not to generate new ideas.…”
Section: Discussion
confidence: 99%