Hello fellow journalologists,
Here’s the gist of what’s happened in scholarly publishing in the past week. The full-length version of Journalology will return later this month.
Thank you to our sponsor, Digital Science
Digital Science is excited to launch the Dimensions Author Check API — a powerful tool that enables publishers to evaluate researchers’ publication and collaboration histories in seconds, directly within existing editorial or submission systems. Built on the most comprehensive research integrity dataset, the API seamlessly integrates into workflows, empowering editorial and integrity teams to make informed, confident decisions. Quickly spot potential risks like retractions, tortured phrases, or atypical collaboration patterns across researchers and their networks. Ideal for reviewing authors, editors, and reviewers at scale, the Author Check API brings speed, transparency, and trust to the publishing process.
To learn more, visit our website.
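For readers who like to see what this kind of submission-time integration looks like in practice, here is a purely illustrative sketch; the endpoint, parameters, and response fields below are invented for illustration and are not the actual Author Check API, whose real interface is documented on the Digital Science website.

```python
# Hypothetical sketch of screening a submitting author against an
# integrity API during manuscript intake. The URL, auth scheme, and
# response schema are invented; consult the vendor's documentation
# for the real interface.
import requests

API_URL = "https://api.example.com/author-check"  # placeholder, not the real endpoint
API_KEY = "YOUR_API_KEY"

def screen_author(orcid: str) -> list[str]:
    """Return any integrity flags for one author (imagined schema)."""
    resp = requests.get(
        API_URL,
        params={"orcid": orcid},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    report = resp.json()
    # Imagined response fields mirroring the signals described above:
    # retractions, tortured phrases, atypical collaboration patterns.
    return [name for name, raised in report.get("risk_signals", {}).items() if raised]

flags = screen_author("0000-0002-1825-0097")  # example ORCID
print("Needs editorial review:" if flags else "No signals.", flags)
```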
Out of a sample of 15,191 journals, his team estimates that the AI correctly classified 1092 as questionable but did the same for 345 journals that weren’t problematic, so-called false positives. It also failed to flag 1782 questionable journals—the false negatives. Additional tuning of the AI increased its sensitivity to questionable titles but also boosted the number of false positives. Acuña says such trade-offs are inevitable, and the results “must be read as preliminary signals” meriting further investigation “rather than final verdicts.”
Science (Jeffrey Brainard)
JB: You can read the accompanying research article in Science Advances here. Nature also covered the story. Daniel Acuña, who created the tool, wrote about it in this blog post: Introducing: ReviewerZero Journal Monitor
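To put those numbers in context: taking the figures quoted above as a standard confusion matrix over the 15,191-journal sample, a few lines of Python give the implied precision and sensitivity.

```python
# Classifier metrics implied by the figures quoted in the Science piece.
total = 15_191
tp = 1_092   # questionable journals correctly flagged
fp = 345     # legitimate journals wrongly flagged
fn = 1_782   # questionable journals the AI missed
tn = total - tp - fp - fn  # everything else: legitimate and not flagged

precision = tp / (tp + fp)    # how often a flag is right
sensitivity = tp / (tp + fn)  # share of questionable journals caught

print(f"true negatives: {tn:,}")          # 11,972
print(f"precision:    {precision:.0%}")   # ~76%
print(f"sensitivity:  {sensitivity:.0%}") # ~38%
```

In other words, roughly three flags in four are correct, but fewer than two in five questionable journals are caught at this operating point, which helps explain why Acuña frames the output as preliminary signals rather than verdicts.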
The accuracy of AI content detectors varies wildly by discipline, training data, and how heavily a text has been edited, so false positives are common. If you use them, treat them as hints, not verdicts: a flag that triggers deeper checks alongside other clues, such as fake references, suspect emails, manipulated images, or gibberish content. Detection tools can help identify suspect manuscripts, but they are not foolproof and must be used in tandem with broader integrity checks.
Looking ahead, we believe the focus will move from “catching AI-generated text” to checking whether authors follow AI-use disclosure rules and editorial standards. Detection will become a routine quality-control step — no longer treated as a special integrity check. But what will still matter — and matter even more — is solid human oversight: making sure the data, citations, research methodology, and critical analysis are sound, regardless of how polished the text appears.
The Scholarly Kitchen (Hong Zhou and Marie Soulière)
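JB: The “hints, not verdicts” advice translates naturally into a triage rule. Here is a minimal sketch, assuming a detector that returns a score between 0 and 1 plus a set of independent integrity flags; the threshold, flag names, and outcomes are invented for illustration, not taken from any real tool.

```python
# Minimal triage sketch: a detector score alone never decides the
# outcome; it only escalates a manuscript when independent signals
# corroborate it. All thresholds and names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Manuscript:
    detector_score: float  # 0.0-1.0 from an AI-text detector
    integrity_flags: set[str] = field(default_factory=set)

def triage(ms: Manuscript) -> str:
    suspicious_text = ms.detector_score >= 0.9   # a hint, never a verdict
    corroborated = bool(ms.integrity_flags)      # fake refs, image issues, etc.
    if suspicious_text and corroborated:
        return "escalate to integrity team"
    if suspicious_text or corroborated:
        return "run deeper human checks"
    return "proceed with normal review"

print(triage(Manuscript(0.95, {"fake_references"})))  # escalate to integrity team
print(triage(Manuscript(0.95)))                       # run deeper human checks
```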
Posts about research on Bluesky receive substantially more attention than similar posts on X, formerly called Twitter, according to the first large-scale analysis of science content on Bluesky. The results suggest that Bluesky users engage with posts more than do users of X.
Bluesky has more than 38 million users and shares many features with X, which fell out of favour with some scientists when it was bought by entrepreneur Elon Musk in October 2022. A survey of Nature readers earlier this year suggested that many prefer Bluesky over X to discuss and disseminate their work.
Nature (Rachel Fieldhouse)
JB: You can read the preprint here. Wired covered a different paper, which was published in Integrative and Comparative Biology back in July, in its story Scientists Are Flocking to Bluesky.
For example, one of the options in the NIH’s proposal would increase limits on APCs if the journal paid peer reviewers, but Marcum said he’s concerned that could result in some peer reviewers trying to game the system to enrich themselves. Instead, he said, “if the NIH really wants to move the needle on this, they should think about other ways to compensate reviewers.” Some of those ideas could include giving peer reviewers credit toward their grant applications, including peer review as part of grant work, or requiring universities that apply for NIH grants to include considerations for their researchers to engage in peer review.
Inside Higher Ed (Kathryn Palmer)
As you can see, we’re moving in the correct direction when it comes to gold and hybrid, green isn’t changing, and bronze coverage is going backwards a bit, although it’s still pretty close to the ground truth number. Our roadmap will prioritize green and gold for the next few months at least.
OurResearch blog (Jason Priem)
JB: This is important for anyone who uses bibliometric tools to do market sizing, as most of the databases use Unpaywall for their OA filter. Improved accuracy is a good thing. However, it will make it difficult to compare new analyses with old ones.
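If you want to see the counts behind an analysis like this, they are one API call away. Here is a minimal sketch against the OpenAlex API (OurResearch’s database); the endpoint and parameter names follow the OpenAlex conventions at the time of writing, but check the current docs before relying on them.

```python
# Count works per OA status (gold, hybrid, green, bronze, ...) in
# OpenAlex for a single publication year.
import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": "publication_year:2024",     # one year, for a manageable slice
        "group_by": "open_access.oa_status",   # one bucket per OA status
    },
    timeout=30,
)
resp.raise_for_status()

for group in resp.json()["group_by"]:
    print(f"{group['key']:>8}: {group['count']:,}")
```

Because the classification itself is being improved, rerunning a query like this before and after an update can shift the counts even when the literature hasn’t changed, which is exactly the comparability problem mentioned above.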
Public trust in science has become more fragile, especially after the pandemic, when people were bombarded with evolving advice and conflicting information. Publishing that brings patients into the process can help repair that trust.
Involving patients in research design and review, and clearly stating how they were involved, signals openness, humility and respect. It shows readers that this is not research done on people, but research done with and for them.
You don’t need to be a medical publisher to apply these ideas. Any publisher that creates content with real-world consequences, from education to policy to social care, can benefit from integrating lived experience.
BMJ Group (announcement)
And finally...
I enjoyed reading Amy Brand’s editorial The future of knowledge and who should control it. Perhaps you will too. Amy argues:
The question of how, and under what conditions, published science is used to train LLMs is not just about copyright. It is about who controls the future of knowledge. Do we cede authority to opaque, extractive industries with little accountability to the research community? Or do we build systems that preserve attribution, integrity, and sustainability? If we are serious about human flourishing, about evidence-based science, and about protecting the conditions under which knowledge grows, then the research community and its institutions must proceed with discernment.
Until next time,
James