Journal Impact Factor and Peer Review Thoroughness and Helpfulness: A Supervised Machine Learning Study

by Anna Severin, et al.

The journal impact factor (JIF) is often equated with journal quality and with the quality of peer review of the papers submitted to the journal. We examined the association between the content of peer review and the JIF by analysing 10,000 peer review reports submitted to 1,644 medical and life sciences journals. Two researchers hand-coded a random sample of 2,000 sentences. We then trained machine learning models to classify all 187,240 sentences as contributing or not contributing to content categories. We examined the association between ten groups of journals defined by JIF deciles and the content of peer reviews using linear mixed-effects models, adjusting for the length of the review. The JIF ranged from 0.21 to 74.70. The length of peer reviews increased from the lowest JIF group (median 185 words) to the highest JIF group (387 words). The proportion of sentences allocated to different content categories varied widely, even within JIF groups. For thoroughness, sentences on 'Materials and Methods' were more common in the highest JIF journals than in the lowest JIF group (difference of 7.8 percentage points). The trend for 'Presentation and Reporting' went in the opposite direction, with the highest JIF journals giving less emphasis to such content (difference of -8.9 percentage points). For helpfulness, reviews for higher JIF journals devoted less attention to 'Suggestion and Solution' and provided fewer examples than reviews for lower impact factor journals. No or only small differences were evident for the other content categories. In conclusion, peer review in journals with a higher JIF tends to be more thorough in discussing the methods used but less helpful in suggesting solutions and providing examples. Differences were modest and variability was high, indicating that the JIF is a poor predictor of the quality of peer review of an individual manuscript.
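The analysis pipeline described above (code each review sentence into a content category, compute per-review proportions, and compare JIF groups) can be sketched in miniature. This is a toy illustration only: the study trained supervised machine learning classifiers on hand-coded sentences and fitted linear mixed-effects models, whereas the keyword cues, category names, and example reviews below are simplified stand-ins invented for demonstration.

```python
# Toy sketch of the sentence-coding and group-comparison idea.
# The keyword cues and the two hypothetical reviews are assumptions for
# illustration; the actual study used trained ML classifiers, not keywords.

def code_sentence(sentence):
    """Assign a sentence to a content category via simple keyword cues."""
    cues = {
        "Materials and Methods": ("sample", "method", "analysis", "statistical"),
        "Suggestion and Solution": ("suggest", "recommend", "should"),
    }
    s = sentence.lower()
    for category, words in cues.items():  # first matching category wins
        if any(w in s for w in words):
            return category
    return "Other"

def category_share(sentences, category):
    """Percentage of sentences coded to `category`."""
    coded = [code_sentence(s) for s in sentences]
    return 100 * coded.count(category) / len(coded)

# Hypothetical reviews from a high-JIF and a low-JIF journal.
high_jif = [
    "The statistical analysis is not described.",
    "Please clarify the sampling method.",
    "The figures are clear.",
    "The method section omits the inclusion criteria.",
]
low_jif = [
    "I suggest rephrasing the abstract.",
    "You should add a limitations paragraph.",
    "The sample size calculation is missing.",
    "Figure 2 is hard to read.",
]

diff = (category_share(high_jif, "Materials and Methods")
        - category_share(low_jif, "Materials and Methods"))
print(f"'Materials and Methods' difference: {diff:.1f} percentage points")
```

In the study itself, the group comparison was done with linear mixed-effects models (review length as a covariate, journals as grouping units) rather than a raw difference in shares, but the quantity being compared, percentage points of review content per category across JIF groups, is the same.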

