Hello fellow journalologists,
This newsletter is on the cusp of hitting the three-figure mark. It's almost as exciting as the turn of the millennium. (Are these available outside the UK? If not, the title of this newsletter will make no sense at all.)
Thank you to our sponsor, Digital Science
Writefull uses AI to automate language and metadata tasks, making them scalable.
Writefull’s Manuscript Categorization API automatically scores and categorizes manuscripts by language quality, empowering publishers to:
- Triage manuscripts based on language quality
- Assign manuscripts to the right copyeditors
- Set APCs aligned with copyediting needs
- Confidently negotiate pricing with vendors
- Evaluate the quality of copyediting work
To learn more about how this solution could streamline your editorial workflow, visit our website or request a demo.
News
BMJCCGG is the first in a new series of 'BMJ Connections' journals, which aim to connect the medical research community for meaningful real world impact. The journals in the series will focus on areas of rapid growth within biomedical research, providing authors with rigorous peer review and an efficient and timely route to publication. Authors will have the reassurance of the high editorial and production standards that characterise all BMJ Group publications.
BMJ Connections Clinical Genetics and Genomics (Darren Griffin)
JB: At the risk of sounding like a stuck record, commercial publishers are not the only ones doing brand extension. Society publishers are too. The phrase “...will focus on areas of rapid growth within biomedical research...” gives you an indication of the purpose of this new BMJ series.
’Publish or perish’ has historically been used to describe academic culture. But it’s also applicable to publishers, which need scale to be sustainable in an increasingly competitive OA marketplace (where revenue per article is often a lot less than under a subscription business model).
I couldn't find a web page devoted to the new BMJ Connections series, but the editorial suggests that it's a transfer venue:
BMJCCGG will sit alongside the Journal of Medical Genetics (https://jmg.bmj.com), which celebrates its 60th anniversary this year. The editors and editorial teams of the journals will work in a collaborative manner. Papers reporting well-conducted research that are rejected from JMG on the grounds of priority can be seamlessly transferred to BMJCCGG, if authors wish, thereby saving authors’ time and effort by not having to resubmit elsewhere and reducing editorial decision-making times.
I don’t think I’ve ever seen a 7-letter acronym for a journal before. I'm particularly impressed that a genetics journal has managed to incorporate a genetic sequence (CCGG) into its name. Will it cite this paper at some point?
It seems odd that the transfer destination has BMJ branding, but the feeder journal does not.
There’s also a landing page for BMJ Connections Oncology. It’s not clear at this point whether it will sit below BMJ Oncology in the transfer cascade. My best guess is that it will.
BMJ Open has been a successful broad-scope OA journal. The BMJ also uses the Open sub-brand for titles such as RMD Open (which is not BMJ-branded) and BMJ Open Gastroenterology (which is).
So, in summary, the BMJ-branded journals fall into one of the following categories. I’ve placed them in likely order of selectivity (my best guess):
- BMJ (OA only)
- BMJ XXX (either OA or hybrid)
- BMJ Connections XXX (OA only)
- BMJ XXX Open (OA only)
- BMJ Open (OA only)
In addition, there are non-BMJ-branded specialty journals, some fully OA and some hybrid, that sit at various levels within this pyramid (e.g. Heart and Open Heart).
So the BMJ portfolio is topped and tailed with broad-scope journals (BMJ and BMJ Open), with three layers of BMJ-branded specialty journals in between, plus some non-branded journals.
The marketing team is going to have to explain to potential authors what the difference is between BMJ XXX, BMJ Connections XXX, and BMJ XXX Open. And authors are going to need to remember whether ’Connections’ and ’Open’ go before or after the clinical specialty in the journal name.
Rising cases of identity fraud and integrity breaches are challenging the scholarly community to protect research integrity without imposing unnecessary burdens on genuine contributors. In response, STM Solutions released Trusted Identity in Academic Publishing: The Central Role of Digital Identity in Research Integrity, a new report analyzing the role of digital identity in scholarly publishing and presenting a foundation for the development of guidelines and recommendations to enhance trust through technology. The report was developed by the Researcher Identity Task & Finish Group that was established last year.
STM (announcement)
JB: You can read the report here.
Price thinks paper series such as this are “definitely a better idea” than one long paper, and adds that there is a lot to be said for concise papers. He points to a 2016 paper published by the LIGO Scientific Collaboration — a conglomerate of more than 100 institutions collaborating in the search for gravitational waves — after its seminal detection of gravitational waves using instruments in Washington and Louisiana. “It’s eight pages [ten, including references] and it revolutionized astrophysics,” he says.
Nature Index (Jackson Ryan)
JB: The peg for this story is a 35-page paper published in Nature back in July. I feel queasy just thinking about the impact on the page budget.
Sleuths using the Problematic Paper Screener have flagged more than 700 articles in the journal that cite retracted or otherwise problematic papers or contain tortured phrases, nonstandard language that hints at the use of a thesaurus or paraphrasing software to avoid plagiarism detection.
Content has been retracted from at least 10 guest-edited issues, according to a Springer Nature spokesperson. The publisher assesses each article individually rather than retracting entire issues, the spokesperson said.
Retraction Watch (Ellie Kincaid)
JB: I’ll be interested to see how Springer Nature handles this case (and others like it) in the first investor earnings call on November 19.
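For anyone who hasn't come across tortured phrases before, here's a rough sketch of how a screening step of this kind might work in principle. To be clear, this is my own illustration rather than the Problematic Paper Screener itself, and the phrase dictionary is a tiny sample (the example pairs are documented cases from the tortured-phrases literature):

```python
# Illustrative sketch only, not the Problematic Paper Screener itself:
# scan a text for known "tortured phrases", i.e. odd synonyms that suggest
# paraphrasing software was used to dodge plagiarism detection.

# A tiny sample dictionary; a real screener would work from a much larger,
# curated list.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "bosom peril": "breast cancer",
}

def find_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected phrase) pairs found in the text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TORTURED_PHRASES.items() if bad in lowered]

if __name__ == "__main__":
    sample = "We apply profound learning to detect bosom peril in chest X-rays."
    print(find_tortured_phrases(sample))
    # [('profound learning', 'deep learning'), ('bosom peril', 'breast cancer')]
```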
Digital Science is pleased to announce that its flagship product Dimensions, the world’s most complete database of linked research information, is launching a beta to explore the responsible use of a new AI-based Natural Language to Query technology.
Digital Science CEO Dr Daniel Hook said: “Boolean queries were a foundational approach for navigating the limitations of 1990s search technologies. However, they can be cumbersome, technically challenging, and often fail to capture the nuance of naturally phrased questions.
“With recent advancements in large language models (LLMs), we’ve developed a new, more intuitive interface for Dimensions: Natural Language to Query (NLQ), which translates user intent directly into a search-ready format.
Digital Science (press release)
JB: The big downside of running sponsorship messages in this newsletter is that it can create a perceived conflict of interest, which may cause some readers to doubt my independence.
However, those of you who know me will understand why I would have covered this story regardless of Digital Science’s sponsorship. I’m a bibliometrics geek. Boolean queries are hard to get right and NLQ could solve a significant pain point (TBC: I haven’t tried the new functionality yet).
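For those who haven't wrestled with Boolean syntax, here's a minimal sketch of the general idea behind natural-language-to-query translation. It's purely illustrative and makes no claims about how Dimensions' NLQ actually works; the prompt wording and field names are my own assumptions:

```python
# A minimal sketch of the idea behind natural-language-to-query (NLQ)
# translation. Everything here is illustrative: the prompt wording and the
# field names are assumptions, not the Dimensions implementation.

def build_nlq_prompt(question: str) -> str:
    """Wrap a plain-English question in a prompt asking an LLM to emit a
    Boolean, fielded query for a bibliographic search index."""
    return (
        "Translate the question below into a Boolean search query using the "
        "fields title, abstract, year and open_access.\n\n"
        f"Question: {question}\nQuery:"
    )

if __name__ == "__main__":
    print(build_nlq_prompt(
        "Which open access papers on CRISPR gene editing were published since 2020?"
    ))
    # A hand-written example of the kind of search-ready output expected:
    # (title:"CRISPR" OR abstract:"gene editing") AND year:>=2020 AND open_access:true
```

The appeal is obvious: the researcher types the question, and the system takes care of the fields, operators, and parentheses that are so easy to get wrong by hand.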
Purpose-Led Publishing (PLP) members AIP Publishing, the American Physical Society, and IOP Publishing are hosting multiple satellite sites for the 2025 APS Global Physics Summit. In an era where the global exchange of scientific knowledge is more critical than ever, these sites will be local, online extensions of the Global Physics Summit, making the event more accessible for all. Through these sites, scientists around the world will be able to directly engage with the cutting-edge discussions and collaborations happening in Anaheim, CA, in March 2025. In addition to participation in the summit, the PLP coalition will feature specialized presentations and workshops on the essentials of successful publishing.
AIP Publishing (announcement)
JB: This is the first public announcement of an initiative since the PLP was announced back in February. I’m very interested in how this partnership plays out because I’m increasingly of the opinion that society publishers will need to partner with each other in order to thrive in the medium to long term. The three physics publishers could pave the way for others to follow.
The study’s findings show how market expectations, priorities and budgets vary, for example by subject area or region. Comprehensive information is provided about institutions and other communities of interest with which publishers might usefully engage in expanding or fine-tuning their sustainability strategies. The substantial data set has been supplemented with expert analysis and digested into actionable recommendations ranging from quick wins to longer term programs.
Kudos (press release)
JB: The report isn’t publicly available, as far as I can tell.
Other news stories
Sage Appoints Khal Rudin as Chief Commercial Officer
Maria Castresana joins Springer Nature as new Chief People Officer. JB: Shall we have a sweepstake on which member of the executive team will next leave the business now that the IPO is complete?
ASH Names New Blood Editor in Chief. JB: Last month ASH also announced the EiCs for two new Blood journals (Blood Immunology and Cellular Therapy; and Blood Red Cells and Iron).
ResearchGate and IMR Press announce Journal Home partnership for complete portfolio
More Researchers Benefit from GetFTR Links to Trusted Content
Embed ethics in research publications, editors say. JB: This story relates to this PLOS ONE paper.
NISO Publishes Update to Journal Article Tag Suite (JATS) Standard 1.4. JB: A webinar is being held on November 18.
Taylor & Francis Joins DAISY Consortium’s Inclusive Publishing Partner Program
Sherpa services combined into new user-friendly platform: open policy finder
Positioning Rights Retention in Europe
Clarivate Launches New Sustainability Research Solution
R2R Programme Announced
Celebrating 20 Years of Discovery with Scopus: A Journey of Innovation and Excellence
Call for Nominations to the FORCE11 Board of Directors
SSP Launches the Mental Health Awareness and Action Community of Interest Group
AI-generated images threaten science — here’s how researchers hope to spot them. JB: This news feature in Nature provides a good overview of this hot topic.
COPE strategy 2025-2028
AI outlines in Scholar PDF Reader: skim per-section bullets, deep read what you need
Thank you to our sponsor, Scholastica
Looking for a better journal editorial-management system? There's no need to settle for expensive, overly complex legacy software.
The Scholastica Peer Review System has the features you need for smooth submissions and streamlined editorial workflows in a user-friendly interface designed for speed and efficiency — all at an affordable price.
Trusted by hundreds of journals, Scholastica empowers editorial teams to spend less time on admin tasks and more time on quality publishing (plus authors and reviewers will love it!).
Visit our website to learn more.
Opinion
Since the artificial intelligence (AI) chatbot ChatGPT was released in late 2022, computer scientists have noticed a troubling trend: chatbots are increasingly used to peer review research papers that end up in the proceedings of major conferences.
There are several telltale signs. Reviews penned by AI tools stand out because of their formal tone and verbosity — traits commonly associated with the writing style of large language models (LLMs). For example, words such as commendable and meticulous are now ten times more common in peer reviews than they were before 2022. AI-generated reviews also tend to be superficial and generalized, often don’t mention specific sections of the submitted paper and lack references.
Nature (James Zou)
JB: Surely the biggest problem is that peer reviewers are pasting confidential manuscripts into LLMs, which will presumably reuse the data when responding to future users?
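As an aside, the telltale signs that Zou describes lend themselves to a crude screening heuristic. The sketch below is my own illustration, not the method behind the Nature analysis; the marker-word list is seeded with the two words quoted above and would need to be much longer (and better validated) in practice:

```python
# A crude heuristic, not the analysis behind the Nature piece: estimate how
# heavily a review leans on words reported to be far more common in
# LLM-generated text than in human-written reviews.

import re
from collections import Counter

# Seeded with the two words quoted in the article; extend as needed.
LLM_MARKER_WORDS = {"commendable", "meticulous"}

def marker_word_rate(review_text: str) -> float:
    """Return the fraction of words in the review that are marker words."""
    words = re.findall(r"[a-z']+", review_text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[w] for w in LLM_MARKER_WORDS) / len(words)

if __name__ == "__main__":
    review = "The authors present a commendable and meticulous analysis of the data."
    print(f"Marker-word rate: {marker_word_rate(review):.1%}")
```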
A post in The Scholarly Kitchen also touched on AI and peer review (First-Hand Publishing Experiences: Researcher Panel at SSP’s New Directions Seminar):
The use of AI-powered tools in peer review was a natural segue to the topic of artificial intelligence. The panelists agreed they would not be comfortable with reviews entirely generated by AI, but many were not opposed to the use of AI at some stages of the review and publishing lifecycle, as long as humans were in the loop throughout.
By suspending journal services like Cureus, or telling eLife that it is under investigation, I seem to see a service in retreat from the Garfield objectives of better universal standards of measurement. The danger for Clarivate is that they will end up owning a measurement guideline which measures a shrinking segment of the market. They may feel safe while the impact factor that arises from these measurements is still linked to the publishing outcomes, vital to academic jobs and promotions. Unfortunately, the things that we say will never change have a habit of changing, and when they do change, it can be very quickly indeed. And Eugene Garfield, were he amongst us now, would be the foremost scientist trying to invent the better mousetrap.
David Worlock (blog)
JB: If (when?) impact factors lose their importance in academic life, then the whole game changes. In the meantime, I’d argue that editors and publishers can’t afford to disregard them completely. Clarivate and other indexers could lose credibility if they index work that fails to meet basic quality control criteria.
Science also makes itself vulnerable to mistrust, and therefore susceptible to cynical attacks, when research misconduct occurs and journals, federal agencies, and universities are slow to correct the record. When principal investigators (PIs) blame their students and postdocs for problems, it creates the perception that PIs care more about protecting their reputations than serving the public interest or supporting their less powerful colleagues. Research shows that the public is more likely to trust scientists when they demonstrate a willingness to change their views on the basis of new data, as of course scientists are supposed to do. This openness can even help counteract negative perceptions that scientists might be biased by their political views. But institutions and many PIs respond to questions about their research by circling the wagons and hiding behind excuses that might protect the institution from liability. Similarly, because institutions are trying to protect themselves from political controversy, they are often unwilling to stand behind papers that are correct if the findings challenge the positions of some groups.
Science (H. Holden Thorp)
JB: Holden’s opinion is worth listening to because he’s held senior roles at universities and is now Editor-in-Chief of Science; he has a unique view on how the two types of organisation perceive, and respond to, misconduct. Publishers have to take responsibility for what they publish, but institutions also need to be accountable for what their faculty members submit and publish. Publishers can lose reputation by brushing problems under the carpet. Can the same be said for institutions?
One of the first topics that came up at the National Academy in terms of concerns about generative AI had to do with the norms of science. As scientists, we value work that is reproducible. You can put the same query into a generative AI model and get 2 different answers. Reproducibility is just not part of the generative AI way of working.
The other thing we really respect in terms of a science norm is attribution. If you are using work that was previously done, you have to cite who did it and what they said. For generative AI, this stuff comes out and you have no idea where it came from. When references are provided, sometimes they’re made up.
A third norm is transparency. In scientific publishing, we value complete transparency of how we got to an answer. What did we use? What was our thinking? What were our preconceived assumptions? We know nothing of that with generative AI.
JAMA (Marcia McNutt in conversation with Kirsten Bibbins-Domingo)
The problem is that the review process for non-profit professional society journals is the same as for the for-profit publisher journals. Any outcome from the anti-trust class action lawsuit, therefore, is likely to affect society publications, too. That could end up saving the for-profit journals from being forced to make huge payouts to reviewers – or it might end up choking off the excess revenue that their parent societies need to serve their memberships.
Times Higher Education (Sheldon H. Jacobson)
Other opinion articles
Advancing equity in research and publishing
MDPI INSIGHTS: The CEO's Letter #17 - OA Week, Basel Open Day, Beijing Graphene Forum
Supporting ECRs in peer review at Nature journals
Science communication will benefit from research integrity standards
Cathy Foley’s open access proposal for Australia
Soviet scientific publishing and the prehistory of preprints
Scholarly Metadata as Trust Signals: Opportunities for Journal Editors
The Top Ten Challenges, Needs, and Goals of Publishers – and How AI Can Help in Digital Transformation and the Open Science Movement
Publishers and Early-Career Researchers Working Together: Guiding Initiatives
Paywalls are Not the Only Barriers to Access: Accessibility is Critical to Equitable Access
The scholarly publishing environment is changing fast. Even the most seasoned publisher can benefit from independent advice. I can help you to build a successful portfolio strategy and thrive in an open access world.
And finally...
There’s still time to register for the upcoming Council of Science Editors Fall Virtual Symposium, which is being held on November 19 and 20. You can find the agenda here. There are four sessions; I’ll be taking part as a panellist in Topic 4:
- Transformative Agreements and Read and Publish Deals: An Emerging Twist in the Battle for Equitable Access to Scholarly Content.
I hope to see some of you there.
Until next time,
James
P.S. Thank goodness the authors didn’t use Midjourney for this article. Rats + horn? What could possibly go wrong?!?