2023
DOI: 10.1088/1361-6552/acc299

ChatGPT and the frustrated Socrates

Abstract: We present a case study of a conversation between ourselves and an artificial intelligence-based chatbot ChatGPT. We asked the chatbot to respond to a basic physics question that will be familiar to most physics teachers: ‘A teddy bear is thrown into the air. What is its acceleration in the highest point?’ The chatbot’s responses, while linguistically quite advanced, were unreliable in their correctness and often full of contradictions. We then attempted to engage in Socratic dialogue with the chatbot to resol…

Cited by 49 publications (43 citation statements)
References 9 publications
“…The analysis, however, points to a contradiction in the literature. Contrary to some promising attempts (e.g., Ganguli et al., 2023) that argue language models can engage in self-correction, other recent studies (e.g., Gregorcic & Pendrill, 2023) do not confirm such attempts. Gregorcic and Pendrill (2023) engaged in a Socratic dialogue with ChatGPT to fix the errors and contradictions in ChatGPT's responses to their question.…”
Section: Discussion (contrasting)
confidence: 57%
“…They will likely provide an acceptable explanation and relevant information in response to a prompt (Farrokhnia et al., 2023; Huh, 2023). For instance, ChatGPT produced linguistically advanced responses to physics questions, but these were unreliable and contradictory (Gregorcic & Pendrill, 2023). It assumed a single truth without evidence (Cooper, 2023).…”
Section: Results (mentioning)
confidence: 99%
“…On the one hand, items created by ChatGPT may initiate classroom discussions about correct solutions. From time to time, ChatGPT will produce items (or answers to these items) that are not (entirely) correct from a scientific point of view (part b in Figure 3); for example, Gregorcic and Pendrill (2023) have found "ChatGPT to be a reliable source of problematic and incorrect answers to conceptual physics questions" (p. 8). Such invalid questions produced by ChatGPT can be used in the classroom to reflect on their scientific correctness.…”
Section: Development of a Conceptual Survey Using ChatGPT for Use in ... (mentioning)
confidence: 99%
“…Floridi and Chiriatti (2020) conclude that "humanity will need to be even more intelligent and critical" (p. 692). In the same direction, Kasneci et al. (2023) suggest integrating large language models into classroom practice "in a way that complements and enhances the learning experience" (p. 7), e.g., helping students develop critical thinking skills (Gregorcic & Pendrill, 2023). In any case, the "physics education community is well-positioned to investigate the use and capabilities of ChatGPT and other AI systems" (Wang, 2023).…”
Section: Introduction (mentioning)
confidence: 99%