Hello fellow journalologists,
When I started this newsletter back in August 2023 I wasn’t sure I’d make it to issue 10, let alone issue 100. And yet, by some miracle, here we are. There have been times when I wished I’d never started writing Journalology, generally at 6 am on a Sunday when there’s a blank sheet in front of me. However, looking back over 100 issues, it’s been an enjoyable and educational experience. I learn something new every week; hopefully you do too.
This issue is being sent to 4664 subscribers, who live in 69 countries and signed up using 864 unique email domain names. Many of you work for large commercial publishers, but the tail is long and more people are signing up with university affiliations, which is great to see.
My goal at the start of the year was to hit 5000 subscribers by year end. That seems like a stretch now, but if you value this newsletter please do send it to your department, or to your editorial board, and encourage them to subscribe. With your help, perhaps I’ll be able to hit that milestone this year.
I spend 6-8 hours a week writing Journalology. I’m self-employed, so the financial support of the sponsors makes the newsletter viable. Please join me in thanking the following organisations for backing Journalology over the past year:
- Academic Publishing in Europe
- Association of Learned and Professional Society Publishers
- Cassyni
- ChronosHub
- Council of Science Editors
- Digital Science
- The Editorial Hub
- Kotahi
- Morgan Healey
- Nicky Borowiec Design & Brand
- Origin Editorial
- Scholastica
- Siliconchips Services
I have plans to develop Journalology further next year. More on that soon. But for now, please read this message from Digital Science, which is about to launch a new research integrity tool.
Thank you to our sponsor, Digital Science
Dimensions Author Check transforms the process of reviewing researchers’ publication histories and networks to check for research integrity issues.
In record time, you can thoroughly review the publishing history of a researcher and the people they have collaborated with, to spot any unusual activities such as retractions, expressions of concern, or atypical collaboration patterns.
Drawing from the most comprehensive research integrity dataset available, Dimensions Author Check offers unmatched transparency into publishing and collaboration histories, accessible through an intuitive and visual dashboard.
To learn more, visit our website or contact the Digital Science Publisher Team.
News
The decision could put eLife’s financial viability at stake, says Randy Schekman, the journal’s founding editor, who left its editorial board after opposing the new publishing model. The open-access journal charges authors whose manuscripts it reviews $2500 each, a key source of revenue. “Not that I give a damn about impact factor, but … its sudden withdrawal will precipitate a drop in submissions,” he says. “I’m afraid they’re going to lose, big time.” Other journals that have lost their impact factor—which is based on average citations to a journal’s papers—have subsequently seen fewer published articles.
Science (Jeffrey Brainard)
JB: For the first time ever, I find myself in violent agreement with Randy Schekman. I’ve written enough about eLife in recent weeks and won’t repeat the same arguments here. However, I will “wildly speculate” that Damian Pattinson is likely to be wrong:
In 2022, MDPI’s International Journal of Environmental Research and Public Health was the third-largest journal. It was delisted from Web of Science in March 2023, losing its impact factor, and hasn’t fared well since. The graph below comes from Digital Science’s Dimensions tool and plots the number of research articles over time.
Clarivate has said that it will only index eLife’s content in Web of Science if the publisher provides a feed that excludes papers that would have been rejected by a conventional journal. If eLife chooses to do that, then perhaps the impact of losing an impact factor will be lessened. Do authors care more about impact factors or about being indexed in Web of Science? Do eLife authors care less about impact factors than the average academic? Does eLife care more about idealism than pragmatism?
eLife’s 2023 annual report was published back in September. The journal incurs roughly £7m in costs a year, of which around £3m are covered by grants. Will the three funders be willing to stump up additional cash if (when) publication fee revenues drop?
The table in the latest financial statement shows what costs were incurred in 2023:
Salaries + payroll taxes + employee benefits + occupancy total £3.43m, which presumably covers the 52 salaried members of staff listed on the website (an average cost of £66,000 ($82,500) per employee; this is a back-of-the-envelope calculation, as some staff are likely to be working part time and new staff could have been added to the masthead in 2024).
’Editorial costs’ add an extra £1.48m; these presumably cover the stipends for the academic editors. So roughly £5m of the £7m costs are overhead. The cost base for most journals is people, not technology.
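For what it’s worth, here is the arithmetic behind those rough figures. The £66,000 and $82,500 pairing implies an exchange rate of roughly $1.25 to the pound, which is my assumption rather than anything stated in the accounts:

```latex
% Back-of-the-envelope restatement of the staffing and overhead sums above.
% The $1.25/GBP exchange rate is an assumption, not a figure from eLife's accounts.
\begin{align*}
\text{Average cost per employee} &\approx \frac{\pounds 3.43\,\text{m}}{52} \approx \pounds 66{,}000 \approx \$82{,}500\\
\text{People-related costs} &\approx \pounds 3.43\,\text{m} + \pounds 1.48\,\text{m} = \pounds 4.91\,\text{m} \approx \pounds 5\,\text{m of the } \pounds 7\,\text{m total}
\end{align*}
```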
The graph below shows the volume of research articles published by eLife (source: Dimensions, Digital Science). The 2024 dotted line is year-to-date.
What effect would a decline in revenues have on the ability to invest? Frontiers had to lay off 600 staff at the start of this year because revenues fell fast. Will eLife’s funders step in to cover any shortfalls? We shall have to wait and see how this plays out. Idealism is all well and good, but pragmatism pays the bills.
Recent research by Rob Johnson and Elle Malcolmson of Research Consulting has highlighted a concerning trend: a rapid decline in self-publishing societies in the UK. But how does this reflect the global picture? What challenges are societies facing internationally? And most importantly, how can learned societies be best supported in navigating these changes?
This survey aims to answer these crucial questions by creating an international evidence base that will benefit the society publishing community and those who support them.
Research Consulting (Rob Johnson)
JB: Rob shared an early version of this survey with me for feedback. The questions it asks are important for all stakeholders, but especially for society publishers. The survey is long, but the topic deserves that degree of depth. On the plus side, the survey will be open until 24 January, so you'll have plenty of time to complete it. Please note this point:
We’re seeking one response on behalf of each society publisher, so if you’re in a large society, please coordinate with your colleagues before submitting a response. The survey assumes in-depth knowledge of a society’s publishing activity and strategy and is designed to be completed by individuals in leadership positions.
The editor in chief of Scientific American, the US’s oldest magazine, has announced her resignation after a series of online posts in which she called some Donald Trump supporters “fascists” and “bigoted”.
In a post on Bluesky on Thursday, Laura Helmuth, who was originally appointed as the magazine’s editor in chief in 2020, said: “I’ve decided to leave Scientific American after an exciting 4.5 years as editor in chief. I’m going to take some time to think about what comes next (and go birdwatching).”
Helmuth’s resignation comes after a series of expletive-filled posts on 5 November – election night – in which she criticized those who voted for Trump.
The Guardian (Maya Yang)
JB: Scientific American is owned by Springer Nature. One of my regrets is that I didn't get to know Laura when I worked at Springer Nature, mainly because of the pandemic. I heard many good things about her from friends and colleagues; this turn of events is incredibly sad both for her personally and for Scientific American.
The saga ended yesterday, when Research Square officially marked the preprint as “withdrawn,” though it remains accessible, as the platform’s guidance notes, if not searchable on the site. The full text and author list remains, with a note that the authors no longer stand by its conclusions.
The experience has left the participants with mixed feelings. Schneider is still outraged that he was listed as an author on a preprint whose contents he does not endorse. The article has racked up more than 4300 views.
Science (Jennifer Couzin-Frankel)
JB: This is journalistic storytelling at its best. I thoroughly enjoyed reading this article.
CPOP [Collective Pathway to Open Publishing] builds on the success of Taylor & Francis’ OA (Read & Publish) agreements, which now help researchers at over 1,000 institutions to publish OA. Some HSS journals with author communities in regions where agreements are common now publish most of their articles OA. However, meeting the criteria for conversion to a full OA journal under an Article Publishing Charge (APC) model remains a challenge due to limited OA funding in HSS fields for articles not covered by an agreement.
CPOP aims to solve this challenge by combining funding from OA agreements with ‘read’ income from subscriptions and other reading access fees. Through CPOP these funding sources can be used collectively to support the journal’s conversion to OA, one volume at a time, without any APCs.
Taylor & Francis (press release)
JB: I’ve read this press release a few times and I’m still confused. That’s probably down to my own inadequacies, but I can’t get Annette Thomas’ voice out of my head: “keep it simple, stupid”. You can read more about the model here.
Glasziou sees the situation as a balance of two forces: AI tools could help scientists to produce high-quality reviews, but might also fuel the rapid generation of substandard ones. “I don’t know what the net impact is going to be on the published literature,” he says.
Some people argue that the ability to synthesize and make sense of the world’s knowledge should not lie solely in the hands of opaque, profit-making companies. Clark wants to see non-profit groups build and carefully test AI tools. He and other researchers welcomed the announcement from two UK funders last month that they are investing more than US$70 million in evidence-synthesis systems. “We just want to be cautious and careful,” Clark says. “We want to make sure that the answers that [technology] is helping to provide to us are correct.”
Nature (Helen Pearson)
JB: This is a (typically) excellent news feature by Helen, who is currently writing a book called What to Believe. According to her website:
What to Believe is the story of the global movement promoting the use of research evidence to reveal what works.
In the 1990s, medicine was transformed by the idea that evidence—in the form of randomized controlled trials—was the best way to guide medical practice, rather than doctors’ differing opinions or anecdotal experience. This concept sparked a revolution known as evidence-based medicine, which has become the predominant form of medicine today. Only by rigorous testing can we know whether a therapy helps or harms.
Since then, the idea that research evidence should be used to show what works has swept into government policy, education, policing, conservation, international development, management and more. What to Believe will relate the important, timely, untold story of this movement. It is the counter-narrative to misinformation’s rise, and a practical guide for anyone seeking a way to make rational decisions in a world of conflicting information.
The book isn’t available yet, but the topic is more relevant than ever. If you haven’t read Helen’s first book, The Life Project, you should (rated 4.5 stars on Amazon with over 200 reviews). It would make a great Christmas present. And, no, I am not on commission — just trying to help out an incredibly talented friend (who did not ask me to write this endorsement).
We report the results of an international survey examining perceptions of reproducibility. Almost 3 quarters of participants reported that they felt there was a reproducibility crisis in biomedicine. The concern appears to apply to biomedicine overall, but also specifically to clinical research, in vivo research, and in vitro research (11% or fewer participants indicated that they think more than 80% of papers in each category were reproducible). Researchers agreed that a variety of factors contribute to irreproducibility; however, the chief characteristic that most participants indicated “always contributes” to irreproducible research was a pressure to publish.
PLOS Biology (Kelly Cobey et al)
JB: I stopped including a Journal Club section in this newsletter, primarily because I feel uncomfortable covering research papers that I don’t have time to read in depth. However, this research article comes from the Centre for Journalology in Ottawa, Canada, so deserves coverage for that reason alone.
If DeepMind makes claims about AlphaFold3 in a scientific publication, “I and others expect them to also share information about how predictions were made and put the AI models and code out in a way that we can inspect”, Gitter adds. “My group’s not going to build on and use the tools that we can’t inspect.”
The fact that several AlphaFold3 replications had already emerged shows that the model was reproducible even without open-source code, says Pushmeet Kohli, DeepMind’s head of AI for science. He adds that he would like to see more discussion about the publishing norms in a field increasingly populated by both academic and corporate researchers.
Nature (Ewen Callaway)
JB: Nature got heavily criticised for publishing the AlphaFold3 paper without insisting that the code be made freely available. Journal editors compete fiercely for the best papers and it can be tempting to bend the rules for ground-breaking papers. This week’s news is a very welcome development.
Other news stories
Bioscientifica Partners with TNQTech for Journal Production
‘All the red flags’: Scientific Reports retracts paper sleuths called out in open letter
Introducing Emerging themes - the latest Scopus AI innovation
GeoScienceWorld Appoints Matt Hudson as Director of Publishing and Publishing Services
Paperpal introduces Overleaf Extension & Centralized User Management Dashboard
Special issues: The roles of special issues in scholarly communication in a changing publishing landscape
Nonprofit Annual Reviews launches Katina Magazine for librarians, publishers, and vendors
PLOS Partners with CLOCKSS to Safeguard its Journals: A Milestone in Open-Access Preservation
Thank you to our sponsor, Scholastica
Looking for a better journal editorial-management system? There's no need to settle for expensive, overly complex legacy software.
The Scholastica Peer Review System has the features you need for smooth submissions and streamlined editorial workflows in a user-friendly interface designed for speed and efficiency — all at an affordable price.
Trusted by hundreds of journals, Scholastica empowers editorial teams to spend less time on admin tasks and more time on quality publishing (plus authors and reviewers will love it!).
Visit our website to learn more.
Opinion
The challenge to the reader is even greater when the Green open access text is a version other than the VoR, such as a preprint or author manuscript. The reader must do all of the same searching, endure all of the same workflow disruptions, etc., while also spending time and effort to determine what version of the article the text is and if it is an acceptable substitute for the Version of Record, and also then read without the benefit of layout and copyediting.
The Scholarly Kitchen (Lisa Janicke Hinchliffe)
JB: Back in the day, The Scholarly Kitchen used to host lots of heated engagement from its audience. Or at least that’s my perception. These days, many authors hear crickets, but not for this post...
Kathleen Shearer, Executive Director of COAR, wrote a response, which you can read here.
The future of scholarly communication will be very different from the current one. It will be about supporting the continual evolution of knowledge in a globally, interconnected ecosystem; about documenting the record of versions (not the version of record); and about supporting diversity of languages, research outputs, and formats beyond the ‘pdf’ in order to fuel scientific advancements.
Supporters of preprints call for greater moderation of preprints and the orchestration of peer review of preprints. While more stringent moderation of preprints may be a rational response, others worry that tightening controls on preprints could hinder the speed and openness that make them valuable in the first place.
To help inform this debate, we at SFU’s ScholCommLab have in recent years been conducting research on various aspects of preprints. Taken together, these studies offer empirical evidence to inform a nuanced vision for the future for preprints—with or without peer review.
Impact of Social Sciences (Natascha Chtena, Juan Pablo Alperin, and Alice Fleerackers)
Other opinion articles
How the SDGs Are Shaping the Research Agenda, and What Publishers Need to Know and Do
Writing assistant, workhorse, or accelerator? How academics are using GenAI
Ignoring journal metrics on CVs is an act of privilege (paywall)
You don’t know what you don’t know: how Signals helps publishers understand research integrity in their journals
With Pod on Our Side: Using Podcasts to Drive Journal Engagement
Guidelines for ethical use and acknowledgement of large language models in academic writing (paywall)
Transformative Agreements Are a Blind Alley
Kitchen Essentials: An Interview with Anita Bandrowski of SciCrunch
5 Benefits of publishing with Wellcome Open Research
JB: One of the benefits is “greater visibility”. Er, are you sure, Wellcome Trust? See issue 58, where I show how low the usage is for articles published on that platform compared with other OA articles (56 page views per article in 2023 is very low indeed).
ResearchGate and MIT announce Journal Home agreement
And finally...
The hardest part about running this newsletter isn’t creating the content; it’s getting the emails delivered. Email servers are often suspicious of newsletters, which can end up in spam folders or not be delivered at all. 700 people have tried and failed to sign up to Journalology, mainly because of the vagaries of email servers.
One way to teach servers that an email is safe is for recipients to reply to the message. If you’ve got this far, please do just that — hit [REPLY]. A one-word response is enough. It will increase the chance that colleagues working at your institution will be able to successfully subscribe to Journalology in the future.
Until next time (unless Journalology gets sent to Room 101),
James