Journalology #87: The Empire Strikes Back



Hello fellow journalologists,

We’re now firmly in the second half of 2024; I wanted to take stock and assess what’s happened, at a macro level, in the first six months of this year. This was prompted, in part, by a LinkedIn post this week by another consultant, Rob Johnson. He posed some questions that I didn't know the answer to, so I did some digging.

I used Dimensions (Digital Science) to search for research articles published in the first 6 months of this year (H1 2024) and compared publishers’ output with the first 6 months of last year (H1 2023).

[Dimensions is sponsoring this newsletter, but they did not ask me to do this analysis.]

All of the data that follow were filtered by: Article [publication type] AND Research Article [document type]. This strips out preprints, as well as reviews, correspondence, editorials and so on.

The table below shows the total output of research articles for the top 20 publishers by research article volume. MDPI and Frontiers have seen significant drops in output in H1 2024 compared with H1 2023. Elsevier and Springer Nature have grown the most (Wiley’s numbers do not include the Hindawi portfolio, which decreased in size).

There’s no way of telling if the papers that would have been published by Hindawi, MDPI and Frontiers etc. have now gone to more traditional outlets. But what has changed, if the numbers are accurate, is the choice of business model.

The following table shows the number of research articles published in the first 6 months of 2023 and 2024 by access model. Dimensions splits out Gold (i.e. articles published in fully OA journals) and Hybrid (OA articles published in hybrid journals). I have lumped together what Dimensions calls Bronze, Green and Closed into “Subscription” in the below table.
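The lumping described above is just a category mapping. A sketch, using Dimensions' category names but hypothetical counts:

```python
# Collapse Dimensions' five OA categories into the three buckets used
# in the table. The grouping follows the text above; counts are invented.
BUCKETS = {
    "Gold": "Gold",          # articles in fully OA journals
    "Hybrid": "Hybrid",      # OA articles in hybrid journals
    "Bronze": "Subscription",
    "Green": "Subscription",
    "Closed": "Subscription",
}

def lump(counts_by_category: dict) -> dict:
    """Sum per-category article counts into the three buckets."""
    totals = {}
    for category, n in counts_by_category.items():
        bucket = BUCKETS[category]
        totals[bucket] = totals.get(bucket, 0) + n
    return totals

print(lump({"Gold": 100, "Hybrid": 40, "Bronze": 10, "Green": 5, "Closed": 45}))
```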

Some articles, especially from smaller publishers that don’t have automated feeds set up, may not be included in Dimensions yet, which may explain the drop in total output in H1 2024.

If I redid the search in 6 months I would expect the H1 2024 numbers to be higher, as more content is indexed. Could the drop in Gold articles be an artefact of slow indexing? That seems unlikely: why would fully OA journals be indexed slower than hybrid or subscription journals? The ratios are unlikely to change, even if the absolute numbers do. Could some of the decrease be because as an industry we’re doing a better job of identifying paper mill content? Possibly, although there’s no specific evidence for that.

Fully open access journals appear to be less popular now than they were this time last year. The fall from grace of Hindawi, MDPI and Frontiers is only part of the story (they collectively fell by 55k articles, but the total drop is 132k); perhaps the bad publicity has affected a wider tranche of fully OA journals. Hybrid OA output did not change much, so transformative agreements can't be responsible for the drop in Gold OA.

Having said that, some publishers are managing to grow their Gold portfolios, especially Springer Nature and Elsevier. The table below shows the 20 largest publishers of Gold OA articles (i.e. research articles published in fully OA journals, using the Dimensions definition of Gold).

Next, I sorted the data for the Gold OA articles (i.e. those published in fully OA journals) using the ANZSRC subject classification scheme. There are clear differences between subject areas. Subjects outside of the natural and applied sciences have seen a larger relative fall in output in fully OA journals. The physical sciences are the only subject area to maintain output.

The top 20 publishers of OA articles in hybrid journals are shown below. Transformative agreements are likely contributing to this growth (lots of green bars and few red ones), at least for the larger publishers.

Let’s now turn our attention to the 20 largest journals, ranked by research article output in H1 2024.

The four MDPI journals have all reduced their output between H1 2023 and H1 2024. IEEE Access has dropped by 34% too, for reasons that I don’t understand.

Springer Nature has three journals in the top 10: Scientific Reports, Cureus and Nature Communications.

The growth in Nature Communications is especially interesting (to me at least). 42% growth is incredible, especially considering that its APC is likely the highest on the list. It could publish more than 10,000 articles this year, which at $6790 a pop would generate $68m in revenue (in practice, likely less than that because of waivers and exchange rate effects). By contrast, Science Advances published 1054 research articles in H1 2023 and 1085 articles in H1 2024.
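The back-of-envelope arithmetic above can be sketched as follows; the 15% waiver/discount rate in the example is my assumption, purely for illustration, not a reported figure:

```python
# Back-of-envelope APC revenue, with an optional haircut for waivers
# and exchange-rate effects.
def apc_revenue(articles: int, list_apc: float, effective_rate: float = 1.0) -> float:
    """Gross APC revenue; effective_rate < 1 models waivers/FX leakage."""
    return articles * list_apc * effective_rate

headline = apc_revenue(10_000, 6_790)            # 67,900,000 -> "~$68m"
with_waivers = apc_revenue(10_000, 6_790, 0.85)  # illustrative 15% leakage

print(f"${headline/1e6:.1f}m at list price, ${with_waivers/1e6:.1f}m after leakage")
```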

With a fair wind, Scientific Reports may end up publishing over 30,000 research articles this year; it grew 41% between the two time periods. Heliyon’s output has almost doubled, which suggests that Elsevier has finally, after a long gestation, got the journal to fire on all cylinders (it’s now part of the Cell Press portfolio).


What can we conclude from this analysis?

  1. Elsevier and Springer Nature have increased their research article output considerably in H1 2024 compared with H1 2023 (14% and 18% growth respectively). As expected, MDPI, Frontiers and Hindawi have seen their output shrink; I was surprised by just how much MDPI has fallen (this may explain why they have been advertising their impact factors so heavily recently).
  2. Fully OA journals seem to be less popular now than they were a year ago, but Elsevier and Springer Nature are bucking the trend. Some of the largest fully OA journals seem to be doing particularly well (unless they’re published by MDPI).
  3. Total hybrid OA is similar in H1 2024 compared with H1 2023. However, most of the 20 largest publishers of OA hybrid articles have increased their output.
  4. The larger society publishers in the chemical and physical sciences seem to be holding their own, by and large. ACS, RSC, IOPP, APS and AIPP have all increased their research article output (IEEE is the exception that proves the rule). Most of the large biomedical societies partner with a commercial publisher; it’s not easy to tease out how they are performing using Dimensions’ standard filters.

What do you make of these data? Please hit [reply] and send me your thoughts.

Thank you to our sponsor, Digital Science

Discover the potential of Dimensions Landscape & Discovery - the innovative app that delivers ready-to-explore visualizations of data from the world's largest linked research database.

Dimensions Landscape & Discovery makes it quick and easy for publishers to make data-driven decisions and gain strategic insights into key players, successful collaborations, adjacent fields, funding and outcomes in any research area.
Visit our website or get in touch to learn more.


News

Hijacked journals are still a threat — here’s what publishers can do about them

How can researchers, publishers and editors avoid journal hijacking? The first step, says Abalkina, is for journal publishers and editors to take the time to build a strong and secure website. She references one journal, Seybold Report, which lacked any online presence when it became the target of hijackers on five occasions. The journal has since launched a website. Abalkina also advises contacting Retraction Watch to help keep her hijacked journal database up to date, and encourages researchers to use it as a resource.

Nature Index (Jackson Ryan)

JB: You can access the hijacked journals database here.


Are retraction notices becoming clearer?

The authors judged retraction notices using criteria including whether notices were freely available and easily accessible, whether they mentioned investigations carried out by journals or institutions, and whether they were candid and transparent about the reasons for retraction. The study authors also checked if notices specified whether the journal or the authors pulled the study and if the journal and all the authors agreed to the retraction.
They found that retraction notices released by Springer Nature became clearer between 2010 and 2020, while those issued by Wiley did not. For some of the criteria examined, the Wiley notices became worse over time, the study found.

C&EN (Dalmeet Singh Chawla)


Informa PLC 2024 Half-Year Results

At the same time, we are seeing Open Research volumes grow and our investments in processes and platforms over recent years are enabling us to capture and monetise more of this output. The combination of robust revenues in traditional areas with acceleration in Open Research is driving the overall rate of underlying growth higher. Additional AI-related revenues mean we will comfortably exceed our stated ambition to deliver 4% underlying revenue growth in 2024 but excluding this new category of revenues, we remain on track to meet our original target for underlying operational growth.

Informa (report)

JB: Taylor & Francis is part of the Informa group. Revenues in the first half of 2024 were £301.1m, up from £283.4m in the same period last year. Adjusted operating margin increased to 31.4%.


Academic authors 'shocked' after Taylor & Francis sells access to their research to Microsoft AI

Authors have expressed their shock after the news that academic publisher Taylor & Francis, which owns Routledge, had sold access to its authors’ research as part of an Artificial Intelligence (AI) partnership with Microsoft—a deal worth almost £8m ($10m) in its first year.
The agreement with Microsoft was included in a trading update by the publisher’s parent company in May this year. However, academics published by the group claim they have not been told about the AI deal, were not given the opportunity to opt out and are receiving no extra payment for the use of their research by the tech company.

The Bookseller (Matilda Battersby)


So you got a null result. Will anyone publish it?

Journals that offer registered reports are not spread equally across disciplines; most are in psychology and, more recently, neuroscience. Few physical-science journals offer the format — even though null results, such as the failure of the Large Hadron Collider near Geneva in Switzerland to find new subatomic particles since the Higgs boson, have been an important part of progress. Emily Sena, a translational-medicine researcher and metascientist at the University of Edinburgh, UK, says that few academics in preclinical fields have been keen to try the format, especially when there is already so much red tape before researchers can begin their experiments.

Nature (Max Kozlov)


Estimating global article processing charges paid to six publishers for open access between 2019 and 2023

We therefore curated and used an open dataset of annual APC list prices from Elsevier, Frontiers, MDPI, PLOS, Springer Nature, and Wiley in combination with the number of open access articles from these publishers indexed by OpenAlex to estimate that, globally, a total of $8.349 billion ($8.968 billion in 2023 US dollars) were spent on APCs between 2019 and 2023. We estimate that in 2023 MDPI ($681.6 million), Elsevier ($582.8 million) and Springer Nature ($546.6 million) generated the most revenue with APCs. After adjusting for inflation, we also show that annual spending almost tripled from $910.3 million in 2019 to $2.538 billion in 2023, that hybrid fees exceed gold fees, and that the median APCs paid are higher than the median listed fees for both gold and hybrid. Our approach addresses major limitations in previous efforts to estimate APCs paid and offers much needed insight into an otherwise opaque aspect of the business of scholarly publishing. We call upon publishers to be more transparent about OA fees.

arXiv (Stefanie Haustein et al)

JB: This would be more useful if we had subscription revenues to compare it with.


Big Ten Academic Alliance + Next Generation Library Publishing Announce the Launch of a Pilot Project

The Big Ten Academic Alliance (BTAA) is excited to announce a partnership with the Next Generation Library Publishing (NGLP) project. This collaboration aims to test and enhance infrastructure solutions for academy-owned scholarly publishing programs that are open source, community-led, and rooted in academic values. The pilot project will create a unified discovery layer for the diverse publishing platforms of participating libraries, presenting them as a single, shared collection of open access materials.

Big Ten Academic Alliance​ (announcement)

JB: The BTAA universities publish over 100,000 articles per year, which they say is 15% of US research articles. For that reason alone, this initiative is worth following.


New RFI - Recommendations on the Use of AI in Scholarly Communication

Following the launch of the Peer Review Quality Assessment page of the Toolkit in June, the EASE Peer Review Committee invites comments and suggestions on the draft of a new Peer Review Toolkit entry, Recommendations on Use of AI in Scholarly Communication.
Responses to this request for information (RFI) are voluntary and may be submitted anonymously until 15 September 2024 using the feedback form.

EASE (announcement)


IOP Publishing extends scope of Progress in Energy as part of prestigious new journal series

IOP Publishing (IOPP) is extending the remit of its journal Progress in Energy by accepting high-impact original research articles alongside its well-recognised review programme. Progress in Energy is part of a developing new Progress In series™ that builds on the reputation of IOPP’s prestigious journal Reports on Progress in Physics and is designed to unite communities looking to advance and explore progressive research across the physical sciences.

IOP Publishing (press release)


Other news stories

AI Summarization now available for multiple documents

Ingo Rother appointed Managing Director of the Berlin Institute for Scholarly Publishing

Metadata schema development plans

AI ‘deepfake’ faces detected using astronomy methods

Editorial and Publishing KPI review

Publishers need to monitor the most important key performance indicators for their business. It’s easy to fall into the trap of collecting lots of data that are “interesting”, but aren’t actionable.

Do you have measures in place that allow you to spot bottlenecks in the publishing pipeline? Do you know which levers to pull to get a journal back on track or to make it even more successful?

You can engage me to review how you measure your business and to help implement the cultural changes needed to ensure that everyone in the team is on their A-game when it comes to monitoring and evaluating the financial and non-financial performance of a portfolio.

Opinion

Scientific Publishing: How and why eLife selects papers for peer review

So there is a clear tension: we would like the outputs of the eLife peer-review process to be of the highest quality, but preparing such assessments is easier for work that editors and reviewers are willing to engage with. Therefore, one clear reason for why we only peer review some submissions is to ensure the quality and rigour of what we produce.
We must continue to focus our efforts on those papers where the outputs of the eLife peer-review process are most valuable. This is not the same as deciding which papers are “the best”. For example, there might be value in reviewing work that is controversial in a particular field of research. However, it does mean concentrating on papers where we believe the scientific content will be most inspiring to eLife readers.
Just as editors and reviewers prefer to review work that is of interest to them, as readers we scientists pay more attention to reviews of work that has the potential to change the way we think about a question. Therefore, if we are going to change scientific publishing for the better, we will do so faster by focusing our efforts on such papers.

eLife​ (eLife Editorial Leadership, eLife Senior Editors, eLife Early Career Advisory Group)

JB: This editorial attempts to explain to authors and readers how the eLife editors select papers for peer review. A lot depends on how you define the word “best”. To my mind, work that “has the potential to change the way we think about a question” is very similar to what I would think of as the “best” work.

There’s a common misperception that journals with high impact factors select work based on how many citations a paper is likely to get. I’ve never seen editors work like that, not least because it’s hard to predict future citations. Most editors understand the limitations of measuring success based on citations or impact factor; they select papers that they think their readers will be interested in or that will move a field forward.

The eLife editors have a finite amount of time to spend on the journal and so the proportion of papers that are sent out to review is likely to be preset. If the quality of the submissions improves, papers that would have been peer reviewed previously would now be rejected (unless extra editorial resource is added to the journal). In other words, like any journal, the threshold for peer review is likely based on editorial resource rather than the actual quality or potential interest of the papers.

The editorial says:

This approach of moving beyond binary accept/reject decisions and more fully conveying the views of expert reviewers has many advantages.

Let’s be clear: there’s still a binary accept / reject decision at play in the eLife model. Papers that don’t get peer reviewed get “rejected” and papers that are selected for peer review get “accepted” (if the authors choose to proceed to publication after peer review). The editors are choosing which papers to peer review, so there’s clearly an editorial threshold to meet in order to get published in the journal. To claim otherwise is disingenuous, in my opinion.

The only way to be truly fair would be to randomly select articles to be peer reviewed, but understandably the editors don’t want to do that.

It’s a shame that the editorial didn’t respond to Dorothy Bishop’s proposal to select papers for peer review based on reading the introduction and methods section alone.

eLife is recruiting a new Editor-in-Chief. The closing date is August 30, 2024. You can read the advert here.


Improving Methods Reporting in the Life Sciences

This month, a new report entitled “Promoting Reusable and Open Methods and Protocols (PRO-MaP): Recommendations to Improve Methodological Clarity in Life Sciences Publications” was issued by the European Commission’s Joint Research Centre. Developed by a working group of researchers, institutions, publishers, and funders and shaped by an extensive consultation process, the report includes recommendations for concrete actions that can be taken by each stakeholder group. This post summarizes the recommendations for publishers and editors, and invites all life science publishers to join this community-based effort.

The Scholarly Kitchen​ (Marcel LaFlamme)


Collapse of scientific standards at MDPI journals: a case study

Having said that, though, my strongest criticism is for the MDPI publishers, who have encouraged an explosion in "special Issues" of their journals, with scant scrutiny of the quality of published articles - each of which brings in an Article Processing Charge of CHF 2600 (around £2,200, or US $2,900). The proliferation of anecdotal reports in the literature gives ammunition to those who wish to promote all kinds of unevidenced treatments: they can point to these "peer reviewed" papers as evidence of scientific respectability.

BishopBlog (Dorothy Bishop)

JB: This blog post is rightly critical of the n=2 study. If it had been published on a preprint server, and not in a journal, would it be any less problematic? The fact that the paper was published in a peer reviewed journal certainly gives it a veneer of respectability, but surely the journalists who covered this article should have known better. To be clear, I’m not defending the journal’s editors. This paper is a shocker.

Dorothy Bishop was also quoted in this Retraction Watch article ‘A proper editor would be horrified’: Why did a pediatric journal publish articles on the elderly? (not an MDPI journal).


GenAI just over a year on - scoring my predictions

Language Generation
We said: This would accelerate the number of fake papers that we might receive.
Since then: We have not observed a significant increase, probably due to the high quality of our titles.
Status: Probably there are more opportunities for us to help our authors, than threats.

Ian Mulvany blog

JB: Ian runs the technology team at the BMJ. I was surprised to read that AI hasn't increased the number of fake papers that the BMJ receives.


Woefully Insufficient Publisher Policies on Author AI Use Put Research Integrity at Risk

I’d suggest building a flexible and quick-acting consortium of publishers (one can dream, right?) who would develop living guidelines across the scholarly publishing industry to make clear, coherent, and consistent policies for authors regardless of where they plan on submitting their work. One advantage of the risk register is that it doesn’t necessarily require publishers to learn every tool, but rather requires them to better understand the landscape of tools and creates high-resolution buckets of tools that can be continually monitored and regulated.

The Scholarly Kitchen​ (Avi Staiman)

JB: What is this strange utopia that you dream of, Avi?


Other opinion articles

New Directions Seminar: Reverse Roundtables Kept the Post-Lunch Conversations Going!

A Look Under the Hood of Scopus AI: An Interview with Maxim Khan


And finally...

The future of science publishing

“Last year was a really pivotal year in scholarly publishing since lots of people who were really pushing gold open access for many years are now thinking, ‘Oh, what beast have we created?’ ” says James Butcher, an independent publishing consultant in Liverpool, England, who writes the newsletter Journalology. “It plays into the hands of the big corporates because it’s all about scale.”
Gold OA creates incentives for journals to publish as many papers as possible to make more money. Some publishers, often referred to as gray OA publishers, have been criticized for exploiting the gold OA model to churn out high volumes of low-quality studies.

C&EN (Dalmeet Singh Chawla)

The publication date for this story is July 29, but it first appeared on July 27. Perhaps the article should have been called: Back to the future of scientific publishing? I hadn’t heard the term “gray OA publishers” before. Had you?

Until next time,

James

