Proceedings of the 2022 ACM Conference on International Computing Education Research - Volume 2 2022
DOI: 10.1145/3501709.3544280
Generating Diverse Code Explanations using the GPT-3 Large Language Model

Cited by 120 publications (48 citation statements)
References 13 publications
“…Language Model: A language model is a type of artificial intelligence model that is trained to generate text that is similar to human language (MacNeil et al, 2022).…”
Section: Some Key Concepts Related To ChatGPT
confidence: 99%
“…In computing education, a recent work by MacNeil et al [19] has employed GPT-3 to generate code explanations. Despite several open research and pedagogical questions that need to be further explored, this work has successfully demonstrated the potential of GPT-3 to support learning by explaining aspects of a given code snippet.…”
Section: Review Of Research Applying Large Language
confidence: 99%
“…This was sufficient for our study: an explanation failure occurred only 13 times out of 159 queries in the grounded condition, on average 1.08 times per user. We do not claim that our algorithm is the most effective, and future work could explore alternative ways of producing grounded utterances, such as leveraging the LLM itself [45,72]. Our system is limited in assuming a single well-defined relational data table.…”
Section: Limitations
confidence: 97%