2023
DOI: 10.1007/978-3-031-35894-4_2
AI Unreliable Answers: A Case Study on ChatGPT

Cited by 14 publications (2 citation statements)
References 11 publications
“…Our results indicated that faculty demonstrated a higher level of knowledge than students. Yet, more than 40% of surveyed students and faculty expressed unwavering trust in the reliability of ChatGPT's responses, a perception that does not align with reality given that most AI-driven conversational models are prone to errors and biases (e.g., Amaro et al, 2023;Ray, 2023). There are two direct implications for this finding.…”
Section: Knowledge (mentioning)
Confidence: 99%
“…classes can often bear some resemblance to each other. However, ChatGPT's responses are sometimes unreliable, with errors detected as such intelligent answering machines can produce reasonable results but are incorrect or illogical (Amaro et al, 2023;Fitria, 2023). Therefore, it is vital to explore whether the students are aware of this AI technology's limitations and use it with caution.…”
Section: Introduction (mentioning)
Confidence: 99%