2023
DOI: 10.1126/sciadv.adh1850
AI model GPT-3 (dis)informs us better than humans

Abstract: Artificial intelligence (AI) is changing the way we create and evaluate information, and this is happening during an infodemic, which has been having marked effects on global health. Here, we evaluate whether recruited individuals can distinguish disinformation from accurate information, structured in the form of tweets, and determine whether a tweet is organic or synthetic, i.e., whether it has been written by a Twitter user or by the AI model GPT-3. The results of our preregistered study, including 697 parti…

Cited by 70 publications (15 citation statements)
References 25 publications (37 reference statements)
“…Previous research reported a potential for OpenAI’s GPT platforms to facilitate the generation of health disinformation on topics such as vaccines, antibiotics, electronic cigarettes, and homeopathy treatments. 6 8 9 12 In our study we found that most of the prominent, publicly accessible LLMs, including GPT-4 (via ChatGPT and Copilot), PaLM 2 and Gemini Pro (via Bard), and Llama 2 (via HuggingChat), lack effective safeguards to consistently prevent the mass generation of health disinformation across a broad range of topics. These findings show the capacity of these LLMs to generate highly persuasive health disinformation crafted with attention grabbing titles, authentic looking references, fabricated testimonials from both patients and doctors, and content tailored to resonate with a diverse range of demographic groups.…”
Section: Discussion (mentioning)
confidence: 71%
“…The rapid adoption of generative AI technologies poses exceptional benefits as well as risks. Current research demonstrates that humans, when assisted by generative AI, can significantly increase productivity in coding ( 13 ), ideation ( 14 ), and written assignments ( 15 ) while raising concerns regarding potential disinformation ( 16 ) and stagnation of knowledge creation ( 17 ). Our research is focused on how generative AI is impacting and potentially coevolving with human creative workflows.…”
Section: Discussion (mentioning)
confidence: 99%
“…Pre-trained models have achieved great success in the field of natural language processing (NLP) [ 21 ]. For example, bidirectional encoder representations from transformers (BERT) [ 22 ], GPT-3 [ 23 ] and other models have reached the state-of-the-art performance in various NLP tasks. Similarly, DNA or RNA sequences can also be considered as a kind of language, with certain grammatical and semantic rules.…”
Section: Introduction (mentioning)
confidence: 99%