Aim/Purpose: To encourage student engagement in peer assessment and to provide students with better-quality feedback, this paper describes a technique for author–reviewer matching in peer assessment systems – a Balanced Allocation algorithm.
Background: Peer assessment involves evaluating colleagues' work and providing feedback on it. The process is widely applied as a learning method to involve students in their own learning. However, because students have different ability levels, the efficacy of peer feedback varies from case to case, and peer assessment may therefore not provide satisfactory results for all students. To mitigate this issue, this paper explains and evaluates an algorithm that matches each author to a set of reviewers. The matching is based on how difficult the author perceived the assignment to be, and the algorithm then assigns the author to a group of reviewers who are likely to meet the author's needs for that assignment.
Methodology: This study used the Multiple Criteria Decision-Making (MCDM) methodology to select a set of reviewers from among the many available options. The weighted sum method was chosen because the data collected in user profiles are expressed in the same unit. The algorithm was examined experimentally with a real dataset and a mock-up dataset. The real dataset covered 240 students and contained self-assessment, peer, and instructor scores for the same assignment; the mock-up dataset comprised 1,000 records of self-assessment scores. The algorithm was also evaluated through focus group discussions with 29 programming students and interviews with seven programming instructors.
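The paper itself does not include an implementation; the following minimal Python sketch only illustrates weighted-sum ranking for reviewer selection. The attribute names, weights, and example data are illustrative assumptions, and the top three candidates are returned as the matched reviewers.

```python
# Hypothetical sketch of weighted-sum reviewer ranking (not the paper's code).
# Assumes each user profile stores self-assessment, peer, and instructor scores
# on the same scale; attribute names and weights are illustrative assumptions.

def weighted_sum(profile, weights):
    """Score a candidate reviewer as a weighted sum of profile attributes."""
    return sum(weights[attr] * profile[attr] for attr in weights)

def match_reviewers(author_id, profiles, weights, k=3):
    """Return the k candidate reviewers with the highest weighted-sum score."""
    candidates = [uid for uid in profiles if uid != author_id]
    ranked = sorted(candidates,
                    key=lambda uid: weighted_sum(profiles[uid], weights),
                    reverse=True)
    return ranked[:k]

# Example usage with made-up data (all scores assumed to share the same unit).
profiles = {
    "s1": {"self_assessment": 3.0, "peer": 4.0, "instructor": 4.5},
    "s2": {"self_assessment": 4.5, "peer": 4.0, "instructor": 4.0},
    "s3": {"self_assessment": 2.0, "peer": 3.0, "instructor": 2.5},
    "s4": {"self_assessment": 5.0, "peer": 4.5, "instructor": 5.0},
}
weights = {"self_assessment": 0.4, "peer": 0.3, "instructor": 0.3}
print(match_reviewers("s1", profiles, weights))  # ['s4', 's2', 's3']
```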
Contribution: This paper contributes to the field in two ways. First, an algorithm using an MCDM methodology was proposed to match authors and reviewers and thereby facilitate the peer assessment process. Second, the algorithm used self-assessment as the initial data source for matching users, rather than creating reviewer–author pairs at random.
Findings: The findings show that the algorithm accurately matches three reviewers to each author. The algorithm was also evaluated from the students' and instructors' perspectives, and the results are very promising, indicating a high level of satisfaction with the Balanced Allocation algorithm.
Recommendations for Practitioners: We recommend that instructors consider using the Balanced Allocation algorithm to match students in peer assessment, and consequently benefit from personalizing peer assessment to students' needs.
Recommendations for Researchers: The approach could be extended with other MCDM methods, such as the analytic hierarchy process (AHP) if different attributes are collected, or an artificial neural network (ANN) if fuzzy data are available in the user profile. Each method suits particular cases, depending on the data available for decision-making.
Impact on Society: Suitable pairing would increase the credibility of the peer assessment process and encourage students' engagement in peer assessment.
Future Research: The Balanced Allocation algorithm could be applied to one group while peer assessment with random matching is conducted with another group, followed by a t-test to determine the impact of matching on students' performance in the peer assessment activity.
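As an illustration only (the study did not run this comparison), such a two-group comparison could be carried out with an independent-samples t-test, for example via SciPy. The scores below are made up and the group sizes are arbitrary.

```python
# Illustrative comparison of matched vs. randomly paired groups (made-up data).
from scipy import stats

matched_scores = [78, 85, 90, 74, 88, 92, 81]  # group matched with Balanced Allocation
random_scores = [70, 76, 83, 68, 79, 85, 72]   # group with random matching

t_stat, p_value = stats.ttest_ind(matched_scores, random_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p-value would suggest a matching effect
```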