2024
DOI: 10.1155/2024/7115633
The Self-Perception and Political Biases of ChatGPT

Jérôme Rutinowski,
Sven Franke,
Jan Endendyk
et al.

Abstract: This contribution analyzes the self-perception and political biases of OpenAI’s Large Language Model ChatGPT. Considering the first small-scale reports and studies that have emerged, claiming that ChatGPT is politically biased towards progressive and libertarian points of view, this contribution is aimed at providing further clarity on this subject. Although the concept of political bias and affiliation is hard to define, lacking an agreed-upon measure for its quantification, this contribution attempts to exam…

Cited by 28 publications (5 citation statements)
References 33 publications
“…A common tenet is that genAI large language models are deemed capable of extracting and synthesising text [101][102][103]. As this is methodologically based on semantic proximity constructs, the results are not prone to biases [10,104], but may suffer from inverted logic [24]. Moreover, the effective summation and synthesis of text requires judgment as to which elements to include, even if they are less dominant in terms of the overall bulk of text to be condensed.…”
Section: Discussion
confidence: 99%
“…Underlying this, however, any level of output can only ever be as good as the nature, diversity, quantity, and quality of the training data that were supplied, as well as any ethical frameworks that may have been deployed in the training and quality assurance process. Users need to be cognisant of the biases derived from the curation of the training sets [10,12], which requires transparency by the companies offering the AI products. It is not well understood, for example, that current models of genAI, such as ChatGPT, draw on 'authoritative' sources, which are, in fact, gleaned from the web, such as Wikipedia [5].…”
Section: Discussion
confidence: 99%
“…In this large comparative study of several hundred survey questions, Atari and co-authors (Atari et al., 2023) find a near-perfect correlation between a WEIRDness scale for each country and the distance between the LLM prediction and the actual human response. Others confirmed this insight (Cao et al., 2023; Rutinowski et al., 2024; see Gallegos et al., 2024, for a review), while Johnson et al. (2022) wittingly suggested that GPT-3 "has an American accent." That there is an innate bias in the models seems to be confirmed by numerous studies that investigated the political bias of models (Motoki et al., 2024).…”
Section: Can LLMs Imitate (at Least Some) Humans?
confidence: 95%
“…Shortly after the release of ChatGPT, it was documented that its answers to political orientation tests tended to be diagnosed by those tests as manifesting left-leaning political preferences [4], [5], [6]. Subsequent work also examined the political biases of other language models (LM) on the Political Compass Test [7] and reported that different models occupied a wide variety of regions in the political spectrum.…”
Section: Introduction
confidence: 99%