
An Item Response Theory Approach to Enhance Peer Assessment Effectiveness in Massive Open Online Courses

SCIARRONE F
Member of the Collaboration Group
2022-01-01

Abstract

Massive open online courses (MOOCs) are effective and flexible resources to educate, train, and empower populations. Peer assessment (PA) provides a powerful pedagogical strategy to support educational activities and foster learners' success, even when a huge number of learners is involved. Item response theory (IRT) can model students' features, such as the skill to accomplish a task and the capability to mark tasks. In this paper the authors investigate the applicability of IRT models to PA in the learning environments of MOOCs. The main goal is to evaluate the relationships between some students' IRT parameters (ability, strictness) and some PA parameters (number of graders per task, and rating scale). The authors use a data set simulating a large class (1,000 peers), built from a Gaussian distribution of the students' skill to accomplish a task. The IRT analysis of the PA data suggests that the best estimate of peers' ability is obtained when 15 raters per task are used, with a [1,10] rating scale.
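The abstract describes the simulation setup (1,000 peers with Gaussian-distributed skill, graded by multiple raters on a bounded scale) but not the authors' exact generative model or IRT estimation procedure. The sketch below is only an illustrative reconstruction of that setup under stated assumptions: the rating rule (author ability minus rater strictness plus noise, squashed onto the [1,10] scale), the distribution parameters, and the naive mean-rating estimate are all assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(42)

N_PEERS = 1000                 # class size from the abstract
N_RATERS = 15                  # raters per task reported as best in the abstract
SCALE_MIN, SCALE_MAX = 1, 10   # [1,10] rating scale from the abstract

# Latent traits; the specific means/variances below are assumptions.
ability = rng.normal(loc=0.0, scale=1.0, size=N_PEERS)     # skill to accomplish the task
strictness = rng.normal(loc=0.0, scale=0.5, size=N_PEERS)  # tendency to mark harshly

def simulate_ratings(ability, strictness, n_raters, noise_sd=0.5):
    """For each peer's submission, draw n_raters other peers at random and
    generate ratings driven by the author's ability minus each rater's
    strictness plus noise, mapped onto the [SCALE_MIN, SCALE_MAX] scale."""
    n = len(ability)
    ratings = np.zeros((n, n_raters))
    for i in range(n):
        pool = np.delete(np.arange(n), i)                  # raters other than the author
        chosen = rng.choice(pool, size=n_raters, replace=False)
        latent = ability[i] - strictness[chosen] + rng.normal(0, noise_sd, n_raters)
        # Logistic link squashes the latent score into the rating scale.
        scaled = SCALE_MIN + (SCALE_MAX - SCALE_MIN) / (1 + np.exp(-latent))
        ratings[i] = np.rint(scaled)
    return ratings

ratings = simulate_ratings(ability, strictness, N_RATERS)

# A naive ability estimate is the mean received rating; the paper instead fits
# IRT models and evaluates estimates (e.g., via Pearson correlation with the
# true simulated ability), as suggested by the keywords below.
naive_estimate = ratings.mean(axis=1)
print("Pearson r (naive mean rating vs. true ability):",
      np.corrcoef(naive_estimate, ability)[0, 1])
```

Varying N_RATERS and the scale bounds in such a simulation is the kind of sensitivity analysis the abstract alludes to when it compares numbers of graders per task and rating scales.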
2022
Grading Scale, Gaussian distribution, Item Response Theory, Latent Ability, Peer Assessment, Pearson Correlation, Rating Peers, Simulation, Strictness
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12606/4601
Citations
  • Scopus: 1