What Interventions Improve the Effectiveness of Technology-Facilitated Peer Assessment? A Meta-Analysis

Kistantia Elok Mumpuni, Samsul Hadi, Slamet Suyanto
Journal of Information Technology Education: Innovations in Practice  •  Volume 24  •  2025  •  pp. 016

This meta-analysis aims to examine the effect size of various interventions in technology-facilitated peer assessment on learning performance. Furthermore, it seeks to identify and recommend effective peer assessment interventions, highlighting opportunities for enhancing peer assessment platforms and supporting the strategic implementation of interventions to optimize learning outcomes.

Peer assessment is a pivotal pedagogical tool, fostering students’ critical judgment and self-assessment skills and providing educators with valuable insight into individual progress. However, concerns among teachers and students persist regarding the effectiveness of peer assessment. These challenges can be addressed through well-designed intervention settings and technological support.

This meta-analysis examines how different intervention settings influence learning outcomes in technology-facilitated peer assessment. Using the PRISMA framework, 24 eligible studies comprising 79 data sets were systematically identified based on predefined inclusion and exclusion criteria. Extracted data were organized into four principal categories: (1) participant characteristics, (2) intervention moderators, (3) research outcomes, and (4) study records. A random-effects model was employed to synthesize the findings. All analyses were conducted in R, primarily using the {meta} package, with additional support from {dmetar} and {metafor} for outlier detection and visualization.
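The study pooled its 79 data sets with a random-effects model in R's {meta} package. As an illustration of what such pooling involves, the sketch below reimplements the classic DerSimonian-Laird random-effects estimator in Python; the effect sizes and variances are hypothetical, and the authors' exact {meta} configuration (e.g., the between-study variance estimator) is not specified in this summary.

```python
# Illustrative sketch only: the study used R's {meta} package; this shows the
# basic logic of DerSimonian-Laird random-effects pooling on hypothetical data.
import math

def random_effects_pool(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model."""
    k = len(effects)
    w_fixed = [1.0 / v for v in variances]  # inverse-variance (fixed-effect) weights
    mean_fixed = sum(w * y for w, y in zip(w_fixed, effects)) / sum(w_fixed)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(w * (y - mean_fixed) ** 2 for w, y in zip(w_fixed, effects))
    c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights add tau^2 to each study's sampling variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(w * y for w, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical standardized mean differences and their sampling variances
effects = [0.25, 0.40, 0.10, 0.55]
variances = [0.02, 0.03, 0.015, 0.05]
pooled, (lo, hi) = random_effects_pool(effects, variances)
print(f"pooled = {pooled:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The random-effects choice matters here: unlike a fixed-effect model, it assumes the true effect varies across the 24 studies (different platforms, settings, and populations) and widens the confidence interval accordingly.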

This study offers a synthesized overview of current research on technology-facilitated peer assessment, contributing to a clearer understanding of its settings and implementation. By identifying relevant trends and effective intervention areas, the findings provide useful guidance for researchers and educators in developing more informed and context-appropriate peer assessment strategies. The review also highlights how technology-facilitated peer assessment can support both student learning and teacher facilitation through more efficient monitoring and feedback delivery.

The analysis revealed a pooled effect size of 0.31 [95% CI: 0.230–0.413], confirming the positive impact of well-designed interventions in technology-facilitated peer assessment compared to alternative or minimal interventions. These interventions enhance key elements such as rubric comprehension, feedback quality, evaluative autonomy, and self-reflection, leading to improved learning outcomes. Effective settings prioritize process-oriented strategies, reciprocal roles, individually allocated feedback, and input from a single peer, while allowing for adaptation based on the characteristics of students, instructors, school contexts, and instructional content.
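The reported interval can be sanity-checked with simple arithmetic: recovering the approximate standard error from the CI half-width shows the pooled effect lies well above zero. This is a rough reader's check, not a computation from the paper (the reported interval is slightly asymmetric around 0.31, so the normal approximation is only approximate).

```python
# Back-of-envelope check of the reported pooled effect, 0.31 [95% CI: 0.230-0.413]:
# recover the approximate standard error from the CI width and confirm the
# interval excludes zero (i.e., the pooled effect is statistically significant).
pooled = 0.31
ci_low, ci_high = 0.230, 0.413

se = (ci_high - ci_low) / (2 * 1.96)   # CI half-width divided by z_0.975
z = pooled / se                        # approximate z statistic

print(f"SE ≈ {se:.3f}, z ≈ {z:.2f}, CI excludes zero: {ci_low > 0}")
```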

Teachers should first determine whether the peer assessment targets written work, products, or performance, ensuring alignment with learning objectives. Subsequently, the setting of the peer assessment intervention should include clear specifications regarding the type of feedback, assessment procedures, and the roles of participants. Providing structured training for both assessors and assessees is essential to enhance feedback quality. Additional factors warranting consideration include the level of anonymity in the assessment process, the duration of the assessment period, student characteristics, and the potential for friendship bias.

While peer assessment has demonstrated validity and reliability across various educational contexts, researchers should continue to explore the conditions under which it most effectively supports learning. Future studies should investigate strategies to minimize potential social challenges, such as peer conflict or bias, through careful group formation and facilitation. The selection of technology should prioritize usability and minimize additional burden for both teachers and students. Additionally, research should consider the impact of voluntary versus mandatory participation, as fostering intrinsic motivation may enhance the quality and acceptance of peer assessment practices.

The findings underscore that, while peer assessment holds potential for enhancing learning, many students initially struggle to provide meaningful feedback, and teachers may question the credibility of peer-generated evaluations. These challenges point to a broader need for structured interventions, such as training, scaffolding with rubrics, and thoughtful feedback design, to build students’ confidence and competence in assessing peers. For educators and policymakers, this suggests that investing in supportive structures and technologies for peer assessment can lead to more engaging and effective classroom practices.

Future research should focus on developing peer assessment platforms to meet the needs of educational contexts. Beyond refining the intervention setting, further studies should explore the integration of peer and teacher feedback for more robust empirical findings. Expanding research across diverse subjects and educational levels is also essential.

Keywords: peer assessment, intervention, technology-facilitated, learning performance, meta-analysis