Peer Review Fatigue

[Figure: Peer Review Network – Publons]

Scholarly peer review is the process of subjecting an author’s scholarly work, research, or ideas to the scrutiny of others who are experts in the same field. Peer reviewers (together with journal editors) are today’s gatekeepers of the research literature used to document and communicate human discovery. To do this, an estimated 68.5 million hours are invested in reviewing globally every year. Is that scalable? Is it even sustainable?

This is one of the questions addressed by the first Global State of Peer Review report by Publons, a company founded in 2012 “to address the static state of peer-reviewing practices in scholarly communication, with a view to encourage collaboration and speed up scientific development.” These are Publons’ estimates for the year 2016:

  • Total publications in 2016: 2.9 million
  • Global manuscript acceptance rate: 55%
  • Total rejections: 2.5 million
  • Reviews per publication: 3 (two in the first round, one in the second)
  • Reviews per rejection: 2 (two in the first round)
  • Estimated number of reviews each year: 13.7 million
  • Median review time: 16.4 days
  • Median time spent writing each review: 5 hours
  • Average length of review reports: 477 words
  • Share of all peer reviews performed by the busiest 10% of reviewers: 50%

The figure above is intended to convey the dimensions and interrelationships of the peer review landscape.
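These headline figures hang together arithmetically. Here is a minimal back-of-envelope check, assuming the per-round review counts listed above apply uniformly to every accepted and rejected manuscript (a simplification of mine, not the report’s own methodology):

```python
# Back-of-envelope check of the Publons 2016 estimates listed above.
# Treating the per-round review counts as uniform averages is my
# simplifying assumption, not the report's method.

publications = 2.9e6          # accepted and published manuscripts, 2016
rejections = 2.5e6            # rejected submissions, 2016
reviews_per_publication = 3   # two reviews in the first round, one in the second
reviews_per_rejection = 2     # two reviews in the first round
hours_per_review = 5          # median hours spent writing a review

total_reviews = (publications * reviews_per_publication
                 + rejections * reviews_per_rejection)
total_hours = total_reviews * hours_per_review
implied_acceptance = publications / (publications + rejections)

print(f"Reviews per year: {total_reviews / 1e6:.1f} million")   # ~13.7 million
print(f"Reviewing hours:  {total_hours / 1e6:.1f} million")     # ~68.5 million
print(f"Acceptance rate:  {implied_acceptance:.0%}")            # ~54%
```

This reproduces the 13.7 million reviews and 68.5 million hours quoted above; the implied acceptance rate of roughly 54% is consistent, up to rounding, with the 55% figure.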

The first referee systems, set in place by English scientific societies in the early nineteenth century, were never intended to play the supreme role they play today. That notion emerged later, around 1900, at the very moment some began to wonder whether referee systems might be fundamentally flawed:

As this idea gained ground, many began to worry that the system itself might be intrinsically flawed, a force that impeded creative science and which ought to be abolished.

What is curious is that referee systems have survived, and that peer review happens at all, despite comparatively weak incentives and recognition:

(…) researchers continue to regard peer review as a critical tool for ensuring the quality and integrity of the literature.

Researchers see peer reviewing as part of their job, something they should reciprocate, and a necessary contribution to the integrity of the published literature. They are also well aware that it is a valuable way to stay up to date with research trends in their field.

(No surprise, then, that the Anonymous Peer Reviewer finally has a crowd-funded monument in Moscow.)

In any case, there is a widespread belief that the current model is sub-optimal. An increasing number of reviewers feel that unpaid peer review is rather unfair (the global cost of reviewers’ time was estimated at £1.9bn in 2008), and Publons finds a growing “Reviewer Fatigue”, which adds to “Attention Decay”. Journals find it increasingly difficult to get their articles peer-reviewed, and editors have to send out more invitations to secure each review (from 1.9 invitations per review in 2013 to 2.4 in 2017).

China might come to the rescue of a stressed system. Researchers from emerging regions are under-represented in the peer review process: at present, scientists in developed countries provide nearly three times as many peer reviews per submitted paper as researchers in emerging nations. Scientists in emerging nations are keen to review, but do not receive as many requests as their colleagues. China now ranks second, but its growth potential is huge:

China surpassed the UK in review output in 2015, and continues to grow rapidly, but still lags behind most established regions in terms of reviews per submission. In 2017, China produced 0.8 reviews per submission compared with an average of 2.3 reviews per submission for all established regions.

assuming that existing trends continue, it will take until 2031 until China is reviewing, on average, as much per submission as the group of established regions. China is, however, projected to reach reviewing parity with the United States, in absolute terms, in 2024.
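To get a feel for what “parity by 2031” implies, here is a toy calculation using only the two 2017 figures quoted above. It is a sketch under my own assumptions (the gap closing linearly, with the established-region average held flat), not the report’s projection method:

```python
# Toy linear extrapolation of China's reviews per submission, built only
# from the two 2017 figures quoted above. The flat baseline for established
# regions and the linear growth model are illustrative assumptions, not
# Publons' projection methodology.

china_2017 = 0.8         # China: reviews per submission, 2017 (from the report)
established_2017 = 2.3   # established regions: reviews per submission, 2017
parity_year = 2031       # parity year projected in the report

# Annual increase China would need if the gap closed linearly by 2031
# while the established-region average stayed constant:
implied_annual_gain = (established_2017 - china_2017) / (parity_year - 2017)
print(f"Implied gain: ~{implied_annual_gain:.2f} reviews per submission per year")
# -> roughly 0.11 per year, i.e. about a decade and a half of steady growth
```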

Meanwhile, demands that publicly funded research be made freely available in open-access journals, together with moves towards open peer review, add pressure (hopefully for good) on the dominant scientific publishing model.

Publons concludes that projecting the future state of peer review is fraught with uncertainty. Researchers, however, have a clear idea of what will make a difference: greater recognition and more formalised incentives for peer review.

It is a bit of a surprise that Publons’ report does not even mention a possible role for artificial intelligence in the future of peer review. I think it may be an area of application more amenable to human-machine collaboration than many other, far more hyped ones: it does not require an all-or-nothing approach, and we could perhaps learn some new tricks for improving other critical, “fatigued” institutions.

In these days of fake-everything hysteria, one can only imagine what would happen if a Facebook-for-scientists (there are quite a few candidates), with a little help from predatory publishers and informational autocrats, became dominant.
