Journalology #93: Baddies



Hello fellow journalologists,

Are scholarly publishers a force for good? Many academics certainly don’t perceive commercial publishers that way. This week we consider whether (with apologies to Stephen Sondheim):

We ain’t no delinquents,
We’re misunderstood.
Deep down inside us there is good!

The same topics come up again and again in this newsletter: open access equity, research integrity, peer review, reproducibility and so on. This week is no different. There’s a lot to catch up on. Enjoy.

Thank you to our sponsor, ChronosHub

With the ChronosHub platform, publishers can now offer their authors a unified interface seamlessly integrated with existing systems, delivering a modern and cohesive experience.

From manuscript submissions to transfer acceptance, publishing agreement signing, and author charge payments, all author actions are accessible in one central location.

Read our latest announcement here or contact Romy Beard to schedule a demo.

News

New tool to assess equity in scholarly communication models

A new online tool designed to assess the equity of scholarly communication models is launched today at the OASPA 2024 conference. The “How Equitable Is It” tool, developed by a multi-stakeholder Working Group, comprising librarians, library consortia representatives, funders and publishers, and convened by cOAlition S, Jisc and PLOS, aims to provide a framework for evaluating scholarly communication models and arrangements on the axis of equity.
The tool, which was inspired by the “How Open Is It?” framework, is targeted at institutions, library consortia, funders and publishers, i.e. the stakeholders either investing or receiving funds for publishing services. It offers users the opportunity to rate scholarly communication models and arrangements across seven criteria.

Plan S (announcement)

JB: Improving equitable access (for authors and readers) to scholarly publishing is one of the biggest challenges that we face as a community, and any initiative that works towards that goal should be welcomed.

Equity is important, but it can’t be the only consideration. Financial sustainability is important too, for example, as is quality control. A journal could get high scores on all seven of the How Equitable Is It? criteria while publishing low quality work. Is that better than a journal that publishes high quality articles, but in a less equitable way? Is a journal that generates a good financial return that allows the publisher to invest in new technologies that benefit its authors in the future “worse” than a journal that is fully equitable, according to this rating scale, but is unable to invest to create a better user experience?

There are no easy answers here. A balance needs to be struck. The only way we will be able to do that is if all stakeholders are willing to engage meaningfully with each other. The following publishers were involved in creating this new rating score: PLOS, Cambridge University Press, eLife, European Mathematical Society, PeerJ, and SciELO. It’s not clear whether any of the large commercial publishers applied to join the working group but were not selected by the conveners.

You can use the tool here and read the draft framework here. The press release notes:

Stakeholders in the academic publishing ecosystem are encouraged to test the tool and provide feedback until the 28th October 2024 via the form
https://coalitions.typeform.com/Equity-Feedback to help refine the criteria and increase its utility. The Working Group will review all input and publish a revised version in early 2025.

Scitility PBC, a public benefit corporation, unveils argos, the revolutionary tool that makes visible the hidden patterns and cascading effects of retractions on scientific research.

Scitility PBC (www.scitility.com), a technology-first company whose mission is to improve scientific integrity, announced today the launch of its debut product: argos.
argos is a groundbreaking free-to-individual solution that tackles the growing challenge of scientific malpractice. By organizing over a billion pieces of publicly available information, argos empowers publishers, researchers, institutions and funders in their goal of improving scientific integrity.
With its ability to swiftly analyze vast amounts of retraction data, argos promises to transform the way the scientific community addresses the cascading effects of fraudulent activities. argos provides the tool to find the hidden patterns caused by retractions in order to improve trust and integrity in science.

Scitility (press release)

JB: Publishing fraudulent research (for example, from paper mills) has the potential to negatively impact a journal’s or publisher’s brand, especially if the journal is subsequently delisted by Clarivate.

At the ALPSP meeting a few weeks ago the shortlisted products for the Innovation Awards had a very strong focus on research integrity: Papermill Alarm, Signals, and Morressier’s Integrity Manager were all on the list.

Now there’s a new sheriff in town called argos (not to be confused with the UK retailer of the same name), which was created by three former Springer Nature executives.

You can play around with the dashboard here. The graph below (filtered for articles published between 2020 and 2024) shows the number of articles that argos considers high risk (blue bars) and retractions (red bars; using the Retraction Watch database) by publisher.

Hindawi, unsurprisingly, has the largest red bar (retractions), but in many ways the blue bars are more interesting. Elsevier has more potentially problematic articles during this time period than any other publisher, according to argos. It’s not clear from this graph what proportion of papers are problematic; Elsevier publishes a lot of papers after all.

MDPI and Springer also fare poorly compared to their immediate competitors. Springer Nature imprints appear a few times on this list presumably because each one uses its own DOI prefix: Springer Science + Business Media; BioMed Central; Springer Nature; and the Nature Portfolio. The ratio of blue to red is higher for MDPI than for any other publisher (MDPI has not retracted many papers).

The argos dashboard also includes a view on which journals have published the most retractions. The table below has the filter restricted to 2024.

I’ve been in contact with Jan-Erik de Boer, one of the founders, who was on the Springer (and Springer Nature) executive team for over a decade. Jan-Erik says:

We have a special account type for journalists / sleuths which is also free! That will give them access to our full portfolio analysis which shows the high risk articles in circulation and the reasoning behind why we think it is a potentially problematic article.

Research integrity tools such as argos will make it very clear which journals and publishers have been hit by paper mills or have cut too many corners. This is undoubtedly a good thing.


Multi-Journal Submission 2024 update

Introduced in September 2020, Cell Press Multi-Journal Submission (MJS) is the only service at a major publisher that allows authors’ manuscripts to be considered simultaneously by the relevant journals among the 30 participating life, medical, and—most recently—physical science titles. Four years in, the editorial team, including a core of six dedicated MJS editors, now handles over 400 submissions a month, helping everyone save time by avoiding sequential editorial evaluation and review over multiple rounds.

Cell Press (announcement)

JB: More publishers should try this kind of approach, especially those with a well-defined transfer cascade. One of the biggest problems with peer review is the duplication of effort as articles get submitted, rejected and resubmitted. That’s the main reason why I’m broadly supportive of the eLife experiment.

Nature Portfolio experimented with Guided OA (my brainchild), which got closed down soon after I left in February 2022. As far as I can tell there was never a formal write up of the experiment; this editorial in Nature Physics provides some background. I still believe that the core idea is a good one.


MDPI Elevates Peer Review with AI and Innovative Technology

At MDPI, we actively pursue innovative technologies to ensure high-quality manuscripts are published in our journals. Ethicality, currently in testing, will fortify our submission process by helping to identify AI-generated publications, self-citation, and non-related references, among other capabilities.
Visit this space, 24–26 September, to learn about two AI-driven tools MDPI has developed to advance research integrity and peer review, Eureka - Reviewer Recommender and Online Proofreader, as well as our in-house online submission system, SuSy.

MDPI (press release)


SciScore to check for adherence to experimental rigor and reporting requirements in several American Heart Association journals

SciScore, an automated and multifaceted tool based on artificial intelligence, evaluates scientific manuscripts for adherence to several key reporting criteria for rigor and reproducibility. To improve the reproducibility of the research published, SciScore is now integrated into the American Heart Association’s journal submission workflow. Results from the initial pilot, which ran for over a year on the Association’s Circulation Research journal, showed that the average monthly scores for submitted manuscripts rose by 4% month-over-month, helping the journal to improve its rigor and reproducibility.
Moving forward, the following American Heart Association journals will have SciScore integrated in their journal submission process: Arteriosclerosis, Thrombosis, and Vascular Biology (ATVB), Circulation Research, Stroke, Hypertension, and Stroke: Vascular and Interventional Neurology.

SciCrunch (press release)

JB: Any editor who has tried to navigate reporting guideline checklists will know how time consuming they can be. Tools like this one could potentially help to reduce the administrative burden.


Celebrating global societal impact: Introducing the Citation Laureates 2024

The Citation Laureates program is rooted in a data-driven approach to recognizing research excellence. We look to exceptionally highly cited papers – those that have typically amassed more than 2,000 citations over decades in the fields recognized by the Nobel Foundation. Citation counts serve as a powerful indicator of a researcher’s influence, reflecting how often their work is referenced by peers in advancing new discoveries. Nobel recognition typically occurs 20 or more years after a breakthrough discovery – highlighting a need for the Institute for Scientific Information (ISI)™ to navigate the long timeframe between research and its societal application.

Clarivate (David Pendlebury)

JB: You can view the citation laureates website here and browse the winners here.


Other news stories

IOP Publishing expands its open access environmental portfolio with Environmental Research: Water JB: This is an example of portfolio expansion done well.

Exclusive: One university’s three-year battle to retract papers with fake data

My identity was stolen by a predatory conference

Learned Publishing Invites Submissions 2025 Themed Issue

First paper retracted in string of studies using the wrong medication name

New Open Access Agreement Between the University of California and Taylor & Francis

ALPSP Awards Winners Announced at the Annual Conference 2024

Editorial board member quits over journal’s handling of plagiarized paper

News & Views: Register now for Delta Think’s 2024 Market Sizing Update Webinar

CSIC, Peer Community In, and COAR advance innovative publishing model

Clarivate Launches Generative AI-Powered Primo Research Assistant

Kriyadocs and DataSeer announce partnership to enhance data sharing and research integrity in publishing.

Charlesworth Partners with JAMA Network to Boost Visibility of their Publishing Program in China through WeChat

Exploring the New ACS Publishing Center

Wiley Announces CFO Transition

Lorraine Estelle retires from leadership of Information Power

Springer Nature’s IPO shows academic publishing’s resilience

We cover what you Discover JB: Is this a typo? “We covet what you Discover” would work better, no? (Sorry, SN friends: couldn't resist).

‘A Massive Scandal’: Does a Landmark Lawsuit Against Academic Publishers Have Legs?

Thank you to our sponsor, Origin Editorial

At Origin Editorial we aim to ignite change in scholarly publishing. Our full complement of editorial services and consultations, as well as our reporting platform, Origin Reports, will elevate your journals.

Whether it’s through peer-review management services, or consultations on publisher RFPs and transitions, workflow and technology optimization, or reporting and analytics, you’re in good hands with Origin.

Visit our website today to learn more or contact Erin Landis, Managing Director, to ask how Origin can help meet your needs.

Opinion

Can AI be used to assess research quality?

Research quality is difficult to define, although there is a general consensus that good quality research is underpinned by honesty, rigour, originality and impact. There’s a wide variety of mechanisms, each operating at different levels of the research ecosystem, to assess these traits, and myriad ways to do so. The bulk of research-quality assessment happens in the peer-review process, which is, in many cases, the first external quality review performed on a new piece of science. Many journals have been using a suite of AI tools to supplement this process for some time. There’s AI to match manuscripts with suitable reviewers, algorithms that detect plagiarism and check for statistical flaws, and other tools aimed at strengthening integrity by catching data manipulation.

Nature Index (Jackson Ryan)

JB: One of the most important decisions that editors make is whether to send a paper out for peer review. New AI tools will likely be designed to help make that process more efficient. However, there are some significant challenges that will need to be addressed, especially with regard to perpetuating inequity.

This week the 67 Bricks CEO, Jennifer Shivas, wrote an opinion piece: Unlocking AI’s Potential for Publishers: A Strategic Approach to Maximizing ROI. Here’s an excerpt:

One of the most powerful applications of AI is in forecasting future trends. By analyzing historical data, journalism, and other sources, AI can either power internal decision-making (helping publishers predict what content or topics will resonate with audiences in the coming months or years) or provide predictive forecasts to customers, creating new revenue and making you essential in their workflow. These forecasting tools are not just about staying ahead of the curve—they’re about making informed decisions that can drive long-term growth, for you and your customers.

Editors have long used bibliometric tools to try to spot trends and hot topics; AI tools that can help them to do this better (and more efficiently) will be a competitive advantage. Jennifer goes on to say:

Editorial Decision-Making Enhancements: Speed and accuracy are crucial in today’s fast-paced publishing environment. AI tools can accelerate editorial processes by helping teams make faster, more informed decisions. From identifying trending topics to optimizing publication schedules, these tools can broaden a publisher’s coverage while reducing time to market.

There’s a risk that AI tools will perpetuate the status quo. After all, they use past data to inform the future. As a community we need to be careful about how we use these tools. More of the same isn’t always a good thing. I touched upon this topic in the webinar I recorded last week (see “And finally...” section for more information).


Ask the Community - Thoughts on a Class Action Lawsuit Brought Against Scholarly Publishers

Articles are not only the major (and almost exclusive) unit measuring research achievements for researchers but under Gold OA, they have become the unit of revenue for publishers. This combination creates only two models for competition: (i) competition for volume – the ‘article economy’, or (ii) competition for prestige – the ‘prestige economy’. The competitive models currently co-exist and neither one is good for research. We have a vicious circle of some publishers seeking to increase profit through article growth, researchers focused on advancing their careers through prestige publication, and universities desperate to climb the rankings.
Unfortunately, even if successful, this lawsuit will do little to address this toxic mix. If researchers truly want to reduce the market power of the largest publishers, they need to break their dependence on them for purposes of evaluation and career advancement. But the burden for driving systemic change cannot lie with researchers alone. I sincerely hope that this will serve as a moment for introspection in the scholarly publishing community, and then for us to move forward as active partners in building a publishing ecosystem that better reflects 21st century research practices and values.

The Scholarly Kitchen​ (Alison Mudditt)

JB: This story will run for a while and it’s useful to have a mix of voices that provide different viewpoints. The general consensus is that the lawsuit misses the mark, but Alison’s underlying diagnosis is correct. Academics are angry about scholarly publishing. Some of that distrust is misplaced, in my opinion, but it won’t change any time soon unless we make publishing less of a black box. Academics dislike high profit margins, which is understandable, but most have very little understanding of the benefits of the capital investments that publishers make each year. They also often don’t see the care and passion that publishing professionals put into their craft. The vast majority of the people that I’ve worked with over the past two decades want to make a positive contribution to global society; they don’t wake up every morning thinking of new ways to fleece academia. Having said all that, Alison is right: introspection is helpful and we should ask ourselves: Are we the baddies?


Other opinion articles

How and why do the life sciences cite social sciences?

How is generative AI changing social science?

Mental Health Awareness Mondays - Ask the Mental Health Editors: Part One

Mental Health Awareness Mondays - Ask the Mental Health Editors: Part Two

Takeaways from ALPSP 2024 - 67 Bricks

Nobody’s Listening — S2O and APCs

Accelerating action towards the Sustainable Development Goals

Enabling open access publishing in Asia and Oceania

Fake journals aren’t publishers at all – they are dishonest reformatters

Peer Review Week 2024: Ask the Chefs

Coaching to support newly promoted managers

The bespoke Journalology coaching programme is especially helpful for publishing professionals who are transitioning into new leadership roles and need support to hit the ground running.

The Journalology executive coaching programme is designed for editorial and publishing executives who want support from someone with real-world experience who can provide editorial and business insight, help improve decision-making, as well as hone leadership and management capabilities.

It can be lonely at the top, but it doesn’t need to be. Get the support you deserve.

And finally...

We’re about to enter Peer Review Week, which will be a busy affair. There are a number of live events in the coming days. The menu of options can be found here and here.

A few days ago Danielle Padula and I pre-recorded a webinar, which will go live on Friday September 27. Here’s the blurb:

From research integrity specialist to AI expert to author experience coordinator — today’s scholarly journal editor has more functions to fulfill than ever before. Who is the “new journal editor” of the 21st century? And how can they keep up with the rapid pace of change? Join Scholastica’s head of marketing and community development, Danielle Padula, and Founder of the industry-leading Journalology newsletter and consultancy, James Butcher, for a water-cooler-style chat about how the role of the journal editor is transforming and what steps editors can take to stay ahead of the curve.

For the record, I won’t be doing any more “water cooler” or “fireside” chats. Rather, from now on I will be restricting my efforts to “wine-cellar waffles”; I am much more interesting and loquacious after a glass of red.

Until next time,

James

