
Peer review research assessment: are the reviewers really experts?

Giovanni Abramo
2025-01-01

Abstract

Peer review is a cornerstone of research evaluation, yet its credibility depends critically on the expertise of those conducting it, a condition too often assumed rather than verified. This study investigates the actual expertise of reviewers involved in Italy’s national research assessment exercise (VQR 2020-2024). We analyze the disciplinary alignment between reviewers and submissions, as well as reviewers’ scientific performance, measured via a normalized ten-year citation impact indicator. Results reveal considerable variability in reviewer expertise and workload across fields, with some disciplinary sectors markedly underserved. Nearly half of the reviewers belong to the top 30% of national research performance, but a significant minority score below the median. Reviewers appointed by the evaluation agency (ANVUR) tend to outperform those selected by lottery, though both groups include cases of inadequate expertise. These findings challenge the assumption that peer reviewers in national evaluations are consistently qualified and call for more rigorous, transparent, and performance-informed selection processes to safeguard the integrity and credibility of peer review at scale.
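The abstract does not give the formula for the reviewer performance indicator. The sketch below shows one common reading, assuming a field- and year-normalized citation score: each publication's citations divided by the average citations of same-field, same-year publications, averaged over the researcher's ten-year output. All names and values are illustrative; the actual VQR/ANVUR indicator may differ in detail.

```python
from statistics import mean

# Illustrative publication records for one reviewer: (field, year, citations).
# In practice these would span the ten-year window used in the assessment.
publications = [
    ("economics", 2016, 42),
    ("economics", 2019, 10),
    ("statistics", 2021, 5),
]

# Expected citations per (field, year): mean citations of all publications
# in that field and year, taken from the reference database (e.g. Scopus).
# The values here are made up for the example.
expected = {
    ("economics", 2016): 21.0,
    ("economics", 2019): 8.0,
    ("statistics", 2021): 4.0,
}

def normalized_citation_impact(pubs, expected):
    """Mean of citations / field-year expected citations over a
    researcher's publications. A score of 1.0 marks performance in
    line with the field average; the VQR indicator may differ."""
    return mean(c / expected[(f, y)] for f, y, c in pubs)

print(f"Normalized citation impact: "
      f"{normalized_citation_impact(publications, expected):.2f}")
```

Because the score is normalized to the field average, reviewers across different disciplinary sectors become comparable, and placing a reviewer in the national distribution (e.g. the top 30% or below the median, as in the abstract) follows directly.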
Keywords:
bibliometrics
peer review
research evaluation
research performance
VQR
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12606/37965
Citations
  • Scopus 0