An Algorithm for Peer Review Matching in Massive Courses for Minimising Students' Frustration
Iria Estévez-Ayres (Universidad Carlos III de Madrid, Spain)
Raquel M. Crespo-García (Universidad Carlos III de Madrid, Spain)
Jesús A. Fisteus (Universidad Carlos III de Madrid, Spain)
Carlos Delgado Kloos (Universidad Carlos III de Madrid, Spain)
Abstract: Traditional pedagogical approaches are no longer sufficient to cope with the increasing challenges of Massive Open Online Courses (MOOCs). Consequently, it is necessary to explore new paradigms. This paper describes an adaptation of the peer review methodology for application to MOOCs. Its main goal is to minimise students' frustration by reducing the number of committed students who receive no feedback from their peers. To achieve this objective, we propose two algorithms for peer review matching in MOOCs. Both reward committed students by prioritising the review of their submissions. The first algorithm uses sliding deadlines to minimise the probability that a submission goes unreviewed. Our experiments show that it dramatically reduces the number of submissions from committed students that receive no review. The second algorithm is a simplification of the first: it is easier to implement and, although it performs worse, it still improves on the baseline.
Keywords: Massive Open Online Courses, assessment, evaluation, peer assessment, peer evaluation, peer review, quality
Categories: K.3, K.3.1