2022
DOI: 10.1111/bjet.13270

Beyond item analysis: Connecting student behaviour and performance using e‐assessment logs

Abstract: Traditional item analyses such as classical test theory (CTT) use exam‐taker responses to assessment items to approximate their difficulty and discrimination. The increased adoption by educational institutions of electronic assessment platforms (EAPs) provides new avenues for assessment analytics by capturing detailed logs of an exam‐taker's journey through their exam. This paper explores how logs created by EAPs can be employed alongside exam‐taker responses and CTT to gain deeper insights into exam items. In…
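For readers unfamiliar with the CTT statistics the abstract refers to, the sketch below shows one common way to compute item difficulty and discrimination from a binary response matrix. It is a minimal illustration, not code from the paper, and the corrected item-total (point-biserial against the rest score) formulation is an assumption about which discrimination variant is meant.

```python
# Minimal CTT sketch: item difficulty (proportion correct) and item
# discrimination (point-biserial correlation between an item score and
# the total score on the remaining items). Not taken from the paper.
import numpy as np

def ctt_item_stats(responses: np.ndarray) -> list[dict]:
    """responses: binary matrix, rows = exam-takers, cols = items."""
    stats = []
    for j in range(responses.shape[1]):
        item = responses[:, j]
        # Difficulty: proportion of exam-takers answering correctly.
        difficulty = item.mean()
        # Rest score: total score excluding the item itself, so the
        # item's own contribution does not inflate the correlation.
        rest = responses.sum(axis=1) - item
        # Discrimination: point-biserial (Pearson) correlation between
        # the item score and the rest score.
        discrimination = np.corrcoef(item, rest)[0, 1]
        stats.append({"item": j, "difficulty": float(difficulty),
                      "discrimination": float(discrimination)})
    return stats

# Example: 5 exam-takers, 4 items (1 = correct, 0 = incorrect).
resp = np.array([[1, 1, 0, 1],
                 [1, 0, 0, 1],
                 [1, 1, 1, 1],
                 [0, 0, 0, 1],
                 [1, 1, 0, 0]])
for s in ctt_item_stats(resp):
    print(s)
```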

Cited by 5 publications (3 citation statements) | References 71 publications
“…Additionally, it was shown that MCQ items with a higher character count tend to have a poorer discrimination index, but that the character count has no effect on difficulty. These effects have been described in other studies that also concluded that examinees require more time for poorly discriminating items ( 30 , 46 ) as well as for difficult items ( 46 , 47 ). As a result, to improve discrimination and reduce required response time, items should be kept as clear and concise as possible, which aligns with the formal requirements of multiple-choice items in the literature ( 12 , 28 , 48–52 ).…”
Section: Discussion (supporting; confidence: 72%)
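A hedged sketch of how such a length/discrimination relationship could be checked: Spearman rank correlations between item character counts and the two CTT indices. The data and variable names below are hypothetical, and the cited study's actual analysis may differ.

```python
# Illustrative only (not from the cited study): does item character
# count relate to CTT discrimination and difficulty?
from scipy.stats import spearmanr

char_counts    = [120, 340, 95, 410, 180, 260]   # hypothetical item lengths
discrimination = [0.41, 0.18, 0.45, 0.12, 0.33, 0.25]
difficulty     = [0.72, 0.55, 0.80, 0.48, 0.66, 0.60]

rho_disc, p_disc = spearmanr(char_counts, discrimination)
rho_diff, p_diff = spearmanr(char_counts, difficulty)

# The pattern reported above would appear as a negative correlation
# with discrimination and a near-zero correlation with difficulty.
print(f"length vs discrimination: rho={rho_disc:.2f}, p={p_disc:.3f}")
print(f"length vs difficulty:     rho={rho_diff:.2f}, p={p_diff:.3f}")
```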
“…From a broader perspective, there is a strong match between the critical factors mentioned in the literature and the top three priority factors raised by LA experts in previous work (see Chevreux et al, 2020 and Figure 3). In this vein, recently published studies such as Cukurova et al (2023) and Lahza et al (2023) also take the aforementioned factors into account, but from a teacher and technical perspective rather than an institutional point of view. For instance, these studies mention teachers' workload, trust and ownership by teachers, guidance, professional development, support, classroom orchestration, privacy concerns, technical infrastructure, data collection protocols and type of data analysis, all of which match our 14 factors.…”
Section: Discussion (mentioning; confidence: 99%)
“…The authors tested the model using student trace‐data and assessment data collected in an LMS site of an authentic mathematics course. Lahza et al (2022) examined exam‐taker behaviours from exam logs extracted from electronic assessment platforms, thus adding to traditional methods in item analysis. The authors demonstrated that this approach can be used to determine the effectiveness of test items and identify the items that need to be revised to improve the overall reliability of an exam.…”
Section: Brief Overview of Contributions (mentioning; confidence: 99%)
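As a rough illustration of the log-based approach described in that citation, the sketch below derives per-item time-on-task from timestamped item-view events. The log schema and field names are assumptions made for illustration, not the paper's actual EAP format.

```python
# Hedged sketch, not the paper's implementation: per-item time-on-task
# from hypothetical EAP event logs of (exam_taker, item_id, view time).
from collections import defaultdict
from datetime import datetime

log = [
    ("s1", "q1", "2022-05-01T09:00:00"),
    ("s1", "q2", "2022-05-01T09:02:30"),
    ("s1", "q1", "2022-05-01T09:05:00"),  # revisit of q1
    ("s1", "q3", "2022-05-01T09:06:10"),
]

def time_per_item(events):
    """Sum dwell time between consecutive item views, per item."""
    events = sorted(events, key=lambda e: (e[0], e[2]))
    totals = defaultdict(float)
    for (u1, item, t1), (u2, _, t2) in zip(events, events[1:]):
        if u1 != u2:  # do not span across exam-takers
            continue
        dwell = (datetime.fromisoformat(t2)
                 - datetime.fromisoformat(t1)).total_seconds()
        totals[item] += dwell
    return dict(totals)

print(time_per_item(log))  # {'q1': 220.0, 'q2': 150.0}
```

Aggregating such dwell times across exam-takers gives a behavioural signal (e.g. unusually long response times) that can be set alongside CTT difficulty and discrimination when flagging items for revision.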