Jacob Bishop holds B.S. and M.S. degrees in Mechanical Engineering. He is currently a graduate student at Utah State University pursuing a Ph.D. in Engineering Education. His research interests are multidisciplinary. In educational research, his interests include model-eliciting activities, open online education, educational data mining, and the flipped classroom. In quantitative methodology and psychometrics, his interests focus on the use of latent variable models to analyze variability and change over time.
BACKGROUND
Peer review is a beneficial pedagogical tool. Despite the abundance of data instructors often have about their students, most peer review matching is done by simple random assignment. In fall 2008, a study was conducted to investigate the impact of an informed algorithmic assignment method, called Un-weighted Overall Need (UON), in a course involving Model-Eliciting Activities (MEAs). The algorithm showed no statistically significant impact on MEA Final Response scores. A follow-up study was then conducted to examine the assumptions underlying the algorithm.
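The abstract does not detail the UON matching procedure itself. As a minimal sketch, assuming "overall need" is an unweighted sum of per-dimension deficits on a team's first draft and that higher-need teams are matched first, an informed assignment might differ from random assignment as follows (the team names, reviewer names, and rubric dimensions are hypothetical, not from the study):

```python
import random

# Hypothetical first-draft scores per team on several MEA rubric
# dimensions (1 = poor, 4 = strong); dimension names are illustrative.
DIMENSIONS = ["math_model", "reusability", "documentation"]
first_draft_scores = {
    "team_a": {"math_model": 2, "reusability": 4, "documentation": 1},
    "team_b": {"math_model": 4, "reusability": 3, "documentation": 4},
    "team_c": {"math_model": 1, "reusability": 2, "documentation": 2},
}
reviewers = ["rev_1", "rev_2", "rev_3"]

def unweighted_overall_need(scores, max_score=4):
    """Sum of per-dimension deficits, every dimension weighted equally."""
    return sum(max_score - scores[d] for d in DIMENSIONS)

# Baseline: simple random assignment, as in most peer review systems.
random_pairs = list(zip(random.sample(reviewers, len(reviewers)),
                        first_draft_scores))

# Informed assignment: teams with the greatest overall need are matched
# first (a fuller system might also rank reviewers by their demonstrated
# accuracy on a calibration exercise).
teams_by_need = sorted(
    first_draft_scores,
    key=lambda t: unweighted_overall_need(first_draft_scores[t]),
    reverse=True)
informed_pairs = list(zip(reviewers, teams_by_need))

print("random:  ", random_pairs)
print("informed:", informed_pairs)
```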
PURPOSE (HYPOTHESIS)
This research addressed the question: To what extent do the assumptions used in making informed peer review matches (using the Un-weighted Overall Need algorithm) for the peer review of solutions to Model-Eliciting Activities break down in practice?
DESIGN/METHOD
An expert rater evaluated 147 teams' responses to a particular implementation of MEAs in a first-year engineering course at a large Midwestern research university. This evaluation was then used to test the UON algorithm's assumptions against a randomly assigned control group.
RESULTS
Weak correlations were found for the five assumptions underlying the UON algorithm: (1) students complete assigned work; (2) teaching assistants can grade MEAs accurately; (3) accurate feedback in peer review is perceived by the reviewed team as more helpful than inaccurate feedback; (4) teaching assistant scores on the first draft of an MEA can be used to accurately predict where teams will need assistance on their second draft; and (5) the error a peer reviewer makes in evaluating a sample MEA solution is an accurate indicator of the error they will make while subsequently evaluating a real team's MEA solution.
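As a sketch of how one such assumption might be checked, assuming the analysis paired teaching assistant first-draft scores with the expert rater's evaluations of the same teams, a simple Pearson correlation would quantify the relationship. The paired scores below are hypothetical and do not reproduce the study's data:

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical paired scores: a TA's score on each team's first draft
# versus the expert rater's score on the same team's second draft.
ta_first_draft = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0]
expert_second_draft = [3.0, 2.0, 2.5, 3.5, 1.5, 4.0]

# A Pearson r near 0 would indicate the weak relationship the study
# reports; a value near 1 would support the prediction assumption (4).
r = correlation(ta_first_draft, expert_second_draft)
print(f"Pearson r = {r:.2f}")
```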
CONCLUSIONS
Conducting informed peer review matching requires significant alignment between evaluators and experts to minimize deviations from the algorithm's designed purpose.
Alice L. Pawley is an associate professor in the School of Engineering Education, with affiliations with the Women's Studies Program and the Division of Environmental and Ecological Engineering, at Purdue University. She has a B.Eng. in chemical engineering (with distinction) from McGill University, and an M.S. and a Ph.D. in industrial and systems engineering with a Ph.D. minor in women's studies from the University of Wisconsin-Madison. She runs the Feminist Research in Engineering Education (FREE, formerly RIFE) group, whose diverse projects and group members are described at http://feministengineering.org/.