2024
DOI: 10.3390/educsci14020148
Development and Evaluation of a Custom GPT for the Assessment of Students’ Designs in a Typography Course

Miada Almasre

Abstract: Recent advancements in AI technology, generative AI, and Large Language Models (LLMs) have increased the potential for deploying such tools in educational environments, especially in contexts where fairness, quality, and automation of student assessment are priorities. This study introduces an AI-enhanced evaluation tool that utilizes OpenAI's GPT-4 and the recently released custom GPT feature to evaluate the typography designs of 25 students enrolled in the Visual Media diploma offered b…

Cited by 6 publications (2 citation statements). References 19 publications.
"…Unlike previous research by García-Orosa, Canavilhas and Vázquez-Herrero (2023) and Yang (2022), which emphasized the potential and challenges of integrating AI in education without a clear framework for evaluation, this study provides a concrete rubric for assessing the effectiveness of AI chatbots in educational settings. The rubric's focus on "Accuracy," "Relevance," and "Efficiency" parallels the attributes identified by Almasre (2024) and Cope, Kalantzis, and Searsmith (2021) as essential for educational tools. However, our findings extend beyond these attributes by validating a comprehensive set of criteria through empirical methods, addressing a gap in the literature regarding the systematic assessment of AI chatbots.…"
Section: Discussion
Confidence: 88%
"…These results go beyond previous reports. For instance, the validation of the rubric, underscored by the unanimous acceptance of criteria such as "Accuracy," "Relevance," and "Efficiency," echoes the critical attributes highlighted in the literature for evaluating educational tools (Almasre, 2024; Cope, Kalantzis, and Searsmith, 2021). These attributes are important to ensure that AI chatbots effectively aid pedagogy, fostering both knowledge acquisition and the development of critical thinking skills among students.…"
Section: Discussion
Confidence: 99%