Journalology #14: Rewarding reviewers


Hello fellow journalologists,

This time of year is always somewhat frantic, so thank you for taking the time to keep up to date with editorial developments in journal publishing. This week I’ve focused on a topic that’s been discussed repeatedly over the years: whether peer reviewers should be paid. At the end of the newsletter, in the Briefly Quoted section, you’ll find a fabulous story about error bars — I laughed out loud when I read that one.

If you missed the November issue of The Brief, you can read it here. The Clarke & Esposito team covered:

I wrote the first draft for two of the four stories (the final text is very much a C&E team effort). How publishers are covering the SDGs is a favourite topic of mine, so that’s one. Can you guess the other? Answers on a postcard please.

Rewarding reviewers

Finding appropriate peer reviewers has always been challenging, but it has become especially difficult over the past few years. A feature published last week in the Chronicle of Higher Education asks: Is It Time to Pay Peer Reviewers? (paywall).

The publish or perish culture means that scholars are increasingly unwilling to spend precious hours assessing the work of others. Since each paper typically requires two or three peer review reports, scholars need, on average, to review two or three papers for every paper they submit.

There are also geographic imbalances (described in the 2018 Publons report), with researchers from the USA doing a higher proportion of peer review than their counterparts elsewhere. Partly that’s due to language barriers, but there is also likely to be some element of discrimination by editors working in richer nations against researchers working in less well-funded environments.

But perhaps the biggest change in recent years has been the pressure publishers feel to find peer reviewers quickly; authors are increasingly choosing to submit to journals with fast turnaround times. One common way to speed things up is to send multiple requests for review (often more than 10 invitations) at the same time. This scatter-gun approach means that many academics are receiving tens of referee requests a week (and, for some, tens of requests every day), many of which are outside their immediate research interests.

This, coupled with frustration about the perceived high profit margins of commercial publishers, makes many scholars more reluctant to peer review papers than they have been historically. Journal publishing is experiencing a period of crisis. Something needs to change.

A key question, therefore, is whether academics need to be better incentivised to do peer review. One incentive could be hard cash, but there are other options that editors and publishers could consider.

One of the best arguments for paying peer reviewers is that it would allow researchers from less privileged backgrounds to contribute. As the authors of a correspondence to The Lancet put it back in April:

One of the reasons for a greater proportion of reviews being provided by researchers from high-income countries than low-income countries could be that researchers from low-income countries are not equally included in the pool, because they have little time for unpaid work. For example, many health researchers in under-funded countries do not have protected or paid time to conduct research. Having reviewers mainly from high-income countries means that the interest of these scientists and populations are perpetuated, and those in low-resource settings are marginalised.

A few years ago James Heathers started the “The 450 movement”, which advocates for a flat fee of $450 for each peer review report written. In many ways his arguments are compelling:

This is how commercial relationships are conducted. It is straightforward and ubiquitous. The result is often no more complicated or mysterious than a regular bank or wire transfer. You buy goods and services.

The hard truth, however, is that if journals paid $450 for each referee report they received then prices would need to go up. A journal with an APC of $2500 simply couldn’t afford to spend ~$1000 on peer review, even one with a good profit margin.
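To make that arithmetic concrete, here’s a back-of-envelope sketch. The numbers follow the figures above; the variable names are my own, invented for this illustration:

```python
# Back-of-envelope cost of paying reviewers at the "450 movement" rate.
APC = 2500                  # hypothetical article processing charge
FEE_PER_REVIEW = 450        # flat fee advocated by the 450 movement
REPORTS_PER_PAPER = (2, 3)  # typical number of reports per submission

low, high = (n * FEE_PER_REVIEW for n in REPORTS_PER_PAPER)
print(f"${low}-${high} per paper")                      # $900-$1350 per paper
print(f"{low / APC:.0%}-{high / APC:.0%} of the APC")   # 36%-54% of the APC
```

And since rejected papers are also peer reviewed but generate no APC revenue, the true cost per published paper would be higher still.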

However, some journals think that paying reviewers a more modest amount could be a unique selling point. Advances.in launched its first journal a few months ago. Academics can sign up to be editors and peer reviewers, and are paid for both functions. As a peer reviewer they get paid $100 for the first three papers that they peer review each year (i.e. $300 in total) and then $150 for every paper after that. The tally resets each calendar year. There’s a QC process to ensure that peer review reports are up to scratch. Editors get paid for every paper that they send out to review (but not for desk rejects), which incentivises them to be generous with their initial editorial assessment. Editors are not allowed to invite more than two reviewers. The APC is $1950.
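The tiered payment structure is easy to misread, so here’s a minimal sketch of how a reviewer’s annual earnings would accumulate under the scheme described above. This is my own illustrative code; the function name is invented, not anything published by Advances.in:

```python
def reviewer_earnings(reviews_this_year: int) -> int:
    """Annual reviewer earnings under the tiers described above:
    $100 for each of the first three reviews in a calendar year,
    then $150 for every review after that. Illustrative only."""
    first_tier = min(reviews_this_year, 3) * 100
    extra = max(reviews_this_year - 3, 0) * 150
    return first_tier + extra

# e.g. ten reviews in one calendar year: 3 * $100 + 7 * $150 = $1350
print(reviewer_earnings(10))  # 1350
```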

Meanwhile, PeerJ has been experimenting with a token-based reward mechanism, and announced the initial results a few weeks ago.

It has now been just over nine months since we launched PeerJ Tokens, part of our Contributor Rewards Program. Tokens are earned by PeerJ reviewers and editors, and can be exchanged for discounts on our APC. They are stackable over time and multiple authors can apply them to a submission, maximizing the discount available. You can watch an explainer here.

Over the past nine months, 7000 PeerJ contributors have earned close to 90,000 tokens. Each token is worth a nominal $10, so there are $900k worth of tokens potentially available for APC discounts. (The key word in that sentence is “potentially”. There’s reportedly £263 million of unused credit on Transport for London Oyster cards. Although 90,000 PeerJ tokens have been awarded, it seems likely that a decent proportion of those tokens will never be cashed in.)

Peer reviewers earn 5 tokens ($50) the first time they get invited to peer review a paper (i.e. create an account and get invited) and then 10 tokens ($100) for every peer review report that they contribute subsequently. Editors get paid (a rather paltry) 5 tokens ($50) for every paper they handle. This means that there’s no financial incentive for PeerJ editors to send papers out to review, in contrast to the Advances.in model. The PeerJ (flagship journal) APC is $1395.
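For comparison, here’s a sketch of the PeerJ token arithmetic as I understand it from the scheme described above (again my own illustrative code, not PeerJ’s):

```python
TOKEN_VALUE = 10  # each token is worth a nominal $10 off an APC

def reviewer_tokens(reports_written: int) -> int:
    """Tokens earned by a reviewer who has accepted an invitation:
    5 tokens for the first invitation, then 10 tokens for every
    peer review report subsequently contributed. Illustrative only."""
    return 5 + 10 * reports_written

# A reviewer invited once who then writes four reports:
# 5 + 4 * 10 = 45 tokens, nominally $450 in APC discounts.
print(reviewer_tokens(4) * TOKEN_VALUE)  # 450
```

Against PeerJ’s $1395 flagship APC, that would be a meaningful but partial discount.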

A token model would be much more appealing to researchers if the tokens were transferable between publishers. However, from a publisher’s perspective tokens are a way of keeping researchers ‘on platform’. PeerJ allows tokens to be given to other contributors or donated to a central pool to pay APCs for authors who do not have the funds available. The PeerJ Hubs model is designed to allow communities to create OA publications that can use the token system and is PeerJ’s play in the society publishing market.

PeerJ also awards “contribution points” to editors and peer reviewers who repeatedly contribute to the PeerJ journals. These points do not have a monetary value assigned to them and look to be a mechanism to publicly acknowledge the hard work of the most diligent contributors.

$100 per peer review is a token payment for a report that might take 5+ hours to write. It would be impossible to properly remunerate peer reviewers for their hard work without significantly increasing APCs. Publishers would need to invest in automated systems to administer and make payments to peer reviewers, as well as pay the reviewers themselves. Creating peer-review payment systems will be much easier for new entrants than for existing publishers, and could be a competitive advantage at a time when it's increasingly difficult to source peer reviewers.

Editorial wisdom

This week’s piece of editorial wisdom comes from Kimberley R. Isett, who is editor of Perspectives on Public Management and Governance and was quoted in The Chronicle of Higher Education article on paying for peer review.


Please do consider giving the gift of a Journalology newsletter subscription this Christmas to those who are nearest and dearest to you. Best of all, it's free!

https://journalology.ck.page

(The Journalology website is still in development and hopefully will see the light of day early next year. I'm now on my third CMS, but I think I’ve found a platform that has all the functionality that I need. The website will contain an archive of the newsletters, which will help me to remember what I’ve written as there’s a significant risk that I will start repeating myself.)


Briefly quoted

Paper-mill detector put to the test in push to stamp out fake science

The integrity hub’s first tool works by scanning papers for more than 70 signals that could indicate that the manuscript has been generated by a paper mill. Those involved remain tight-lipped about what those signals are, so as not to tip off fraudsters. But previous public work has suggested red flags such as formulaic article titles and layouts, bar charts with identical profiles claiming to represent data from different experiments, suspicious-looking author e-mail addresses or strange turns of phrase that could indicate the use of automated translation software.
The second tool is designed to alert editors when someone has submitted a paper to several journals at once. Paper mills use this tactic to try to get papers accepted more quickly (it is considered inappropriate to submit a full manuscript to multiple journals at once).

Nature (Holly Else)


Board members decry their own journal’s retraction of paper on predatory publishers

Many members of the journal’s Distinguished Reviewers Board agreed with Srholec and Macháček’s objections, and criticized the retraction in a letter recently published in Scientometrics. They cited our coverage of the retraction as disclosing “important information about the context of the retraction… including the pressure exerted by the publisher Frontiers on Scientometrics.”

Retraction Watch (Ellie Kincaid)


Retracted papers originating from paper mills: cross sectional study

Our cross sectional analysis of all papers retracted for originating from paper mills until June 2022, identified from the Retraction Watch database, suggests that these paper mill retractions are increasing in frequency. Nearly all authors of these papers came from China and were predominantly affiliated with hospitals. The median time for retraction of a paper mill paper was close to two years and increased with the ranking of the journal in which it was published, so that the higher the Journal Citation Reports impact factor, the shorter the period until retraction. These papers affect legitimate journals and does not seem to be exclusive to predatory journals. Furthermore, this study showed the impact and visibility of these retracted papers because some were highly cited, with the potential consequences that this entails. To our knowledge, this is the first study to analyse the growing phenomenon of paper mill retractions and their characteristics.

The BMJ (Cristina Candal-Pedreira et al)


Image altering, alleged in Stanford leader’s work, hardly rare

But a comment in the Science piece from Holden Thorp, the editor-in-chief of the journal, is a reminder that even some key players at the pinnacle of scholarly publishing seem to have slept through multiple alarms. Thorp said “2017 would have been [near] the beginning of when more attention was being paid to this — not just for us, but across scientific publishing.”
That kind of comment grates on Rossner, whose journal began using “digital image experts” to screen images in submitted manuscripts soon after it first accepted online submissions in 2001. “I really made it a crusade to try to educate other publishers and other journals about what we were doing and to convince them to take up the same effort,” he told STAT of his time at JCB, which he left in 2013. “Dozens did take up the effort of screening images before publication, including many of the big players.”

STAT (Adam Marcus and Ivan Oransky)


Publisher retracts 400 papers at once for violations of ‘peer-review process policies’

The Institute of Electrical and Electronic Engineers (IEEE) has retracted more than 400 papers “due to violations of IEEE’s peer-review process policies” after “a comprehensive internal investigation.”
The papers formed the proceedings of the International Conference on Smart Cities and Systems Engineering from 2016 through 2018. All of the meetings were reported as being held in cities in China.

Retraction Watch (Ivan Oransky)


The Great Inflation: How COVID-19 affected the Journal Impact Factor of high impact medical journals

The rapid surge of COVID-19 publications emphasised the capacity of scientific communities to respond against a global health emergency, yet inflated IFs create ambiguity as benchmark tools for assessing scholarly impact. The immediate implication is a loss in value of and trust on journal IFs as metrics of research and scientific rigour perceived by academia and the society. Loss of confidence towards procedures employed by highly reputable publishers may incentivise authors to exploit the publication process by monopolising their research on COVID-19 and encourage them towards publishing in journals of predatory behaviour.

Journal of Medical Internet Research (Panagiotis Giannos and Orestis Delardas)


Publication ethics during the Covid times: Reflections on research integrity, authorship, peer review and editorial policies

In short, strict adherence to COPE and ICMJE guidelines by the authors, reviewers and editors, close monitoring by the Research Integrity Office of the institutions and the independent critical scientific evaluation by the vigilant readers are essential for upholding public trust in the scientific establishment. Though rapid decisions are desirable especially during situations such as pandemics, editors should stay true to journal policies, maintain high standards of peer review and transparency.

The National Medical Journal of India (Kuttiatt VS, Menon RP, and Kumar A)


Now — Attempted Bribery

In essence, the individual is offering the editor $1,000 he can pocket personally if he expedites the review and acceptance of a paper… Have you or your editors been offered bribes to expedite the review and/or acceptance of papers? If so, I’d like to see more evidence of this, the techniques and prices involved, and who is originating the attempted grift. Please send what you have to me via email. I will anonymize when necessary, and if there is a sufficient response, I’ll share more examples down the road.

The Geyser (paywall; Kent Anderson)


The Time Has Come to Start Swimming Upstream: How Meaningful Engagement with Authors Early in the Research Process Can Yield Significant Benefits to Publishers

Publishers can, of course, rely on their brand reputation and wait by their inbox for a flood of submissions. However, publishers who take active interest in helping authors further upstream have much to gain. Not only can editors help build trust by shepherding authors through processes that may be intimidating or unfamiliar to them, but they can also proactively attract authors searching for guidance who have yet to decide on a publisher, thus increasing the quantity and quality of manuscripts they receive. This doesn’t have to mean considerable investment in acquisitions, but can simply include the establishment of strategic partnerships with author service and tool providers designed to empower authors.

The Scholarly Kitchen (Avi Staiman)


A paper used capital T’s instead of error bars. But wait, there’s more!

Mere days after tweets went viral pointing out that the purported error bars in one figure of a paper were really just the capital letter T, the publisher has marked it with an expression of concern.

Retraction Watch (Marcus Banks)


And finally...

Thank you for reading until the end. Hopefully you found this newsletter to be helpful. Please do share this email with your colleagues if you think they would benefit from reading it too. The sign up page is here:

https://journalology.ck.page

Until next week,

James

Journalology

The Journalology newsletter helps editors and publishing professionals keep up to date with scholarly publishing, and guides them on how to build influential scholarly journals.
