2013
DOI: 10.4018/jwltt.2013040103

Determining the Consistency of Student Grading in a Hybrid Business Course using a LMS and Statistical Software

Abstract: Extant literature asserted that peer assessments improved learning, but only a few studies had addressed the student rating consistency issue, and none had evaluated it within a Learning Management System (LMS). The researcher explored how to conduct peer assessments in the Moodle LMS for a hybrid-mode business course (N = 90 students) that required 270 (25-page) reports. Rater agreement statistical theory was applied to test the consistency of student peer assessments. The resulting coefficient was 0.79 and statist…
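
The abstract is truncated before naming the agreement statistic behind the 0.79, so the following is a plausible reconstruction rather than the paper's actual method: for continuous peer grades, a standard rater-agreement measure is the intraclass correlation ICC(2,1). A minimal Python sketch on hypothetical scores:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA decomposition of the ratings matrix
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical data: 5 reports, each scored (0-100) by 3 peer raters.
scores = np.array([
    [78, 82, 80],
    [65, 60, 63],
    [90, 88, 92],
    [55, 58, 52],
    [72, 75, 70],
])
print(f"ICC(2,1) = {icc2_1(scores):.2f}")
```

A coefficient around 0.79 on such a matrix would indicate that most score variance comes from genuine differences between reports rather than from rater disagreement.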

Cited by 2 publications (3 citation statements) · References 38 publications
“…The results of this study show no significant difference in group grades based on peer group grading or instructor grading. These results for peer group grading extend the research by Strang (2013), where peer grades and instructor grades in a hybrid course were not significantly different for individual student submissions. These results also build on a study by Avery (2014), which found highly correlated instructor and peer-to-peer assessment of class participation.…”
Section: Discussion (supporting)
Confidence: 85%
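
The comparison described in this citation (peer versus instructor grades on the same submissions) maps naturally onto a paired significance test plus a correlation. A hedged sketch with invented grades, not data from any of the cited studies:

```python
import numpy as np
from scipy import stats

# Hypothetical grades for 8 submissions: the mean of the peer ratings
# versus the instructor's grade for the same submission.
peer_grades       = np.array([81, 74, 90, 62, 85, 70, 77, 88])
instructor_grades = np.array([79, 72, 92, 60, 84, 73, 75, 86])

# Paired t-test: are the per-submission differences centred on zero?
t_stat, p_value = stats.ttest_rel(peer_grades, instructor_grades)

# Pearson correlation: do the two graders rank submissions similarly?
r, p_r = stats.pearsonr(peer_grades, instructor_grades)

print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")  # p > .05 -> no sig. diff.
print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")
```

A non-significant paired t alongside a high r is the pattern these studies report: the two graders neither differ systematically nor rank submissions differently.
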
“…Kulkarni et al. (2013) found that providing students with well-specified dimensions in rubrics for peer assessment enhances agreement between student-generated scores and instructor feedback. This approach was shown to be effective in providing consistency between student peer and instructor grading at the individual level (Strang, 2013). Applying the structured approach to peer group grading overcomes the variability in academic preparedness and also reduces the instructor's need to average, or otherwise tabulate, individual scores for each peer group.…”
Section: Peer Group Grading (mentioning)
Confidence: 99%
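
Kulkarni et al.'s point about well-specified rubric dimensions can be made concrete with a small scoring sketch; the dimension names and weights below are invented for illustration and do not come from the cited studies:

```python
# Hypothetical rubric: explicit dimensions with weights summing to 1.
RUBRIC = {
    "argument_quality": 0.40,
    "evidence_and_citations": 0.30,
    "organisation": 0.20,
    "mechanics": 0.10,
}

def weighted_score(dimension_scores: dict[str, float]) -> float:
    """Collapse per-dimension scores (each 0-10) into one weighted grade."""
    return sum(RUBRIC[d] * s for d, s in dimension_scores.items())

peer = {"argument_quality": 8, "evidence_and_citations": 7,
        "organisation": 9, "mechanics": 8}
instructor = {"argument_quality": 7, "evidence_and_citations": 7,
              "organisation": 9, "mechanics": 9}

print(weighted_score(peer), weighted_score(instructor))

# Per-dimension gaps show *where* raters diverge, which is the point of
# a well-specified rubric: disagreement becomes diagnosable.
gaps = {d: peer[d] - instructor[d] for d in RUBRIC}
print(gaps)
```
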
“…The major concern in peer assessment is its validity as well as its reliability (Cho, Schunn, & Wilson, 2006). Topping (1998) found disagreement in his review on the degree of validity and reliability of peer assessment: some studies report high validity and reliability (Haaga, 1993; Stefani, 1994; Strang, 2013), while others report otherwise (Cheng & Warren, 1999; Mowl & Pain, 1995). However, the issues regarding validity and reliability can be reduced by providing the students with assessment rubrics (Hafner & Hafner, 2003; Jonsson & Svingby, 2007), since these make expectations and criteria explicit.…”
Section: Technology-enhanced Peer Assessment (mentioning)
Confidence: 99%
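
For the reliability concern this passage raises, a chance-corrected agreement statistic is the usual check when grades are categorical. A sketch using scikit-learn's cohen_kappa_score on made-up letter grades:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical letter grades that a peer and the instructor assigned to
# the same ten submissions; kappa corrects raw agreement for chance.
peer       = ["A", "B", "B", "C", "A", "B", "C", "C", "A", "B"]
instructor = ["A", "B", "C", "C", "A", "B", "C", "B", "A", "B"]

kappa = cohen_kappa_score(peer, instructor)
# Values near 1 indicate strong chance-corrected agreement;
# 0 means agreement no better than guessing from the marginals.
print(f"Cohen's kappa = {kappa:.2f}")
```
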