The Journalology newsletter helps editors and publishing professionals keep up to date with scholarly publishing, and guides them on how to build influential scholarly journals.
Last week I included a table of Glassdoor scores for some of the largest publishers, with the caveat:
N.B. the publishing teams are included in each society’s rating, rather than split out.
IOP Publishing, the publishing arm of the Institute of Physics here in the UK, has its own rating on Glassdoor, which I somehow missed when I was putting together the table. Here’s the updated table, with the Institute of Physics (rating = 3.3) removed and IOP Publishing (rating = 3.9) added in its place. The table in the online version of last week’s newsletter has been corrected to mirror the table shown below.
AIP Publishing also has its own score of 4.8, which would put it at the top of the leaderboard, but since that score is based on just three reviews (n = 3) I decided to leave the parent organisation there instead (with the agreement of an AIPP representative).
The last thing scholarly publishing needs is a new ranking system; there are many ways that Glassdoor ratings can be gamed. This table should be interpreted very carefully — reading the comments on Glassdoor is probably the best way to go if you’re considering jumping ship.
Coaching to support newly promoted managers
The bespoke Journalology one-to-one coaching programme is especially helpful for publishing professionals who are transitioning into new leadership roles and need support to hit the ground running.
The Journalology executive coaching programme is designed for editorial and publishing executives who want support from someone with real-world experience who can provide editorial and business insight, help improve decision-making, and hone leadership and management capabilities.
It can be lonely at the top, but it doesn’t need to be. Get the support you deserve.
This is perhaps the most flagrant example, but we argue that it indicates problems with your editorial processes that are not going to be fixed by AI. The only ways an article like this can have been published are either through editorial negligence or outright malpractice. For it to be negligence would require a remarkable degree of professional incompetence from a handling editor. The possibility of malpractice would mean there is a corrupt handling editor who bypasses the peer review process entirely or willingly appoints corrupt peer reviewers to approve the manuscript. We appreciate that some papers that we and others have reported have been retracted, but in other cases blatantly fraudulent papers can take years to be retracted or to receive any appropriate editorial action.
BishopBlog (Dorothy Bishop et al)
JB: This open letter is signed by 23 academics, many of whom are well known research-integrity sleuths. The letter is addressed to Chris Graf, the “Research Integrity Director, Springer Nature and Chair Elect of the World Conference on Research Integrity Foundation Governing Board”, which seems slightly odd to me. The Editor-in-Chief of Scientific Reports has ultimate responsibility for the journal’s content, with a large support network, including Chris, assisting him.
I should start off by addressing my own conflict of interest here. I had publishing responsibility for Scientific Reports between 2012 and 2014 and also between 2019 and 2021; some of the problematic papers listed in the open letter were published on my watch.
The in-house editorial team that works on Scientific Reports is excellent and cares deeply about integrity; I have a huge amount of respect for them. The challenges they face are significant, though, primarily because of the scale of the operation and the type of paper that’s often submitted.
Last year Scientific Reports published 22,000 research articles, more than any other journal; it has already surpassed that number year-to-date (source: Dimensions, Digital Science). The sleuths point to around 100 articles on PubPeer with potential integrity issues that were published between 2017 and 2023, identified by searching for “tortured phrases”. In that period Scientific Reports published around 150,000 research articles, so only a tiny proportion (roughly 0.07%) has been identified as potentially problematic to date.
To be clear, I’m not making excuses or trying to justify publishing papers that are clearly fraudulent. A journal’s core responsibility is to maintain the standard of the scholarly record. We shouldn’t be relying on unpaid sleuths to act as whistleblowers and to hold journals to account. Journal publishers are getting paid to do a job and that job needs to be done properly.
As I have documented in this newsletter before, large journals often rise and then fall. They can become victims of their own success. According to my back-of-the-envelope calculations, Scientific Reports is currently receiving around 1000 submissions a week. The worst ones are desk rejected by in-house editors, but many of the remaining papers have low scientific interest, even if they can be considered methodologically sound. That makes it hard to find suitably qualified academic editors and peer reviewers willing to take them on.
The industrialisation of research publishing — catalysed by pay-to-publish business models and publish or perish research cultures — has had many unfortunate side effects. For me one of the most troubling is the devaluing of the role of the editor.
Academic editors often have a weak allegiance to a large journal; many handle the occasional paper so they can add ‘editor’ to their CV. The sleuths say that Scientific Reports now has 13,000 editors, which means that each editor, on average, handles around 4 articles a year. Frontiers in Neuroscience, to take another example, has 10,391 editors on its masthead (the majority of whom are ‘review editors’); the journal has published 14,576 articles since it launched in 2007.
Editors, like physicians, get better with practice. If I get sick I want to be treated by a doctor nearing retirement, not someone who has just left medical school. Editors get good at their craft by handling lots of papers and by learning from more experienced peers in a supportive environment. That’s just not possible when they only handle a few papers a year, working outside of a tight-knit traditional editorial board structure.
The journal needs to show which editors handle each paper (which is currently invisible), because it's likely that a small number of them are responsible for an outsize fraction of the problem. And Springer Nature needs to take more robust steps when junk papers, junk beyond the shadow of a doubt, are brought to their attention. There is just no way that some of these things could have passed any honest, competent human peer review without being flagged. The reviewers, if there are any, who have let these things be published need to be flagged themselves.
We are now at a point where editors, at the very least, need to be named on published papers, especially on high-volume journals. PLOS and Frontiers have done that from the start and others should follow suit. Persistent identifiers need to be used so that their editorial efforts can be accurately tracked over time. We need editors to take public ownership of the papers that they accept for publication. If that makes it harder to find editors to handle a paper, so be it.
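To make that concrete, here is a minimal sketch of what machine-readable editor attribution might look like, assuming each published article carried its handling editor’s ORCID iD alongside the author metadata. The record structure and field names are hypothetical illustrations, not any publisher’s actual schema.

```python
from collections import Counter

# Hypothetical article records; in practice these would come from a journal's
# production system or metadata feed. Field names and ORCID iDs are
# illustrative placeholders only.
articles = [
    {"doi": "10.1234/example.0001",
     "handling_editor": {"name": "A. Editor", "orcid": "0000-0002-1825-0097"}},
    {"doi": "10.1234/example.0002",
     "handling_editor": {"name": "A. Editor", "orcid": "0000-0002-1825-0097"}},
    {"doi": "10.1234/example.0003",
     "handling_editor": {"name": "B. Editor", "orcid": "0000-0001-2345-6789"}},
]

# Because the editor is identified by a persistent identifier rather than a
# free-text name, their editorial record can be aggregated across journals
# and over time.
per_editor = Counter(a["handling_editor"]["orcid"] for a in articles)

for orcid, n in per_editor.most_common():
    print(f"https://orcid.org/{orcid} handled {n} article(s)")
```

Once editors are identified by persistent identifiers rather than free-text names, it becomes straightforward to see whether a small group of editors accounts for a disproportionate share of problematic papers.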
We should all be clear that this open letter is just the start. Many more journals will come under the spotlight in the future because new tools will make it easier to identify problematic published papers. Scientific Reports hit the sleuths’ radar early because of the sheer volume of articles it publishes. Smaller journals will receive more scrutiny soon, especially when the ratio of ‘red’ to ‘green’ papers is high and easily visible on a publicly available dashboard.
Publishers need to correct the scientific record quickly and efficiently, and also use technology (and human beings!) to prevent the problem from getting worse. Brand reputations are on the line. Executives would do well to remember that the bottom line is also on the line, as the Hindawi experience made abundantly clear.
To help them in this complex and challenging effort, Signals is launching a free manuscript submissions check service for publishers to evaluate the research integrity profiles of their submissions. By giving all publishers, including smaller societies and specialist publishers, access to these essential evaluations, we can build a robust research integrity network that will help prevent the publication of articles from paper mills and other bad actors, protecting the scholarly record.
Signals (press release)
JB: Prevention is better than cure. When it comes to detecting fraudulent research at submission, publishers need to collaborate, not compete. The Signals team should be getting lots of incoming calls off the back of this announcement.
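As a rough illustration of what screening at submission can involve, the sketch below matches a manuscript against a tiny dictionary of documented ‘tortured phrases’ (the garbled paraphrases mentioned in the open letter above). This is a toy example for illustration only; it is not how Signals, or any production screening tool, is actually implemented.

```python
import re

# A few documented "tortured phrases" and the standard terms they mangle
# (real screening tools use much larger curated lists).
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
    "bosom peril": "breast cancer",
    "flag to commotion": "signal to noise",
}

def screen_manuscript(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for phrase, expected in TORTURED_PHRASES.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, expected))
    return hits

sample = ("We apply counterfeit consciousness and profound learning "
          "to improve the flag to commotion proportion.")
for phrase, expected in screen_manuscript(sample):
    print(f"flag: '{phrase}' (expected term: '{expected}')")
```

Real screeners combine much larger fingerprint lists with other signals, which is exactly why shared infrastructure across publishers matters.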
Optical and Quantum Electronics, a Springer Nature journal, has retracted more than 200 papers since the start of September, and continues issuing retraction notices en masse.
According to the notices, which have similar wording, the retractions come after the publisher identified problems with the articles including compromised peer review, inappropriate or irrelevant references, and nonsensical phrases, suggesting blind use of AI or machine-translation software.
The proof-of-concept pilot will assess a broad sample of articles against three different data sharing policies from the Taylor & Francis and F1000 portfolios, to determine how well the automation functions as compared to existing manual editorial checklists. Checks range from relatively simple tasks, such as confirming that a data availability statement is present and accounts for all the datasets contained within the article, to more detailed items such as data licensing and file formats, repository use, accurate links, and persistent identifiers.
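To give a sense of the ‘relatively simple’ end of that spectrum, here is a hedged sketch of an automated check for a data availability statement and dataset links. It is an assumption-laden toy, not the pilot’s implementation; a production check would work on structured JATS XML rather than raw text, and the example DOI and URL below are placeholders.

```python
import re

# Headings that typically introduce a data availability statement.
DAS_HEADINGS = re.compile(
    r"data availability( statement)?|availability of data", re.IGNORECASE
)
# Crude patterns for DOIs and a few well-known repository hosts.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s)\]]+")
REPO_HOSTS = ("zenodo.org", "figshare.com", "datadryad.org", "osf.io")

def check_data_sharing(manuscript_text: str) -> dict:
    """Return a minimal report: is a DAS present, and are there dataset links?"""
    has_das = bool(DAS_HEADINGS.search(manuscript_text))
    dois = DOI_PATTERN.findall(manuscript_text)
    repos = [h for h in REPO_HOSTS if h in manuscript_text.lower()]
    return {
        "data_availability_statement": has_das,
        "dataset_dois_found": dois,
        "repositories_mentioned": repos,
        "needs_editorial_follow_up": not (has_das and (dois or repos)),
    }

# Placeholder text, DOI and URL for illustration only.
text = ("Data Availability: the raw measurements are deposited at "
        "https://zenodo.org/records/0000000 (DOI: 10.5281/zenodo.0000000).")
print(check_data_sharing(text))
```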
Plan S should keep running beyond 2025, the review said, as it has become “an influential stakeholder in fostering an international conversation on the best possible ways to achieve accessibility, affordability and equity in scholarly publishing”.
The initiative should aim to reach new geographies, not necessarily by expanding its member base but potentially through less formal mechanisms.
It should support initiatives to reform research assessment as well as “innovative and equitable” publishing models like the diamond model, which avoids author fees for open access publishing by supporting venues using long-term financing.
It should also further promote international collaboration on authors retaining rights over their papers, which the review said is a “potentially game-changing” development.
Research Professional News (Frances Jones)
JB: You can read the cOAlition S announcement here and the full report here. I haven’t had a chance to read the report properly yet, so I won’t comment further on whether the recommendations are reasonable. It’s worth noting that the report describes itself as independent, but was commissioned by cOAlition S (via the European Science Foundation-Science Connect). Can consultants ever be truly independent? It takes a brave consultancy to bite the hand that feeds it, especially if the report is made public.
The collaboration will see ACS Publications joining the SDG Knowledge Cooperative, an initiative led by Kudos wherein publishers join forces to help more people find, understand and act on research that will help drive progress towards achieving the SDGs.
ACS has identified 7 goals that are “foundational to the work of the chemistry community”; Kudos will help identify articles published within ACS Journals with the greatest impact potential in terms of driving innovation, changing attitudes and behaviours, and accelerating better outcomes. Selected articles will be summarized for broader audiences, and actively promoted to groups including industry, educators, policy makers and the media.
Supporters of double-blind peer review — in which the authors’ identity is withheld from the referee — will find much ammunition in the archive. Many early reviewers refer to the character of the authors and their relationship with them.
For a 1950 paper that discussed ‘anisotropic elastic continuum’ by mathematician James Oldroyd, geophysicist Harold Jeffreys wrote: “Knowing the author, I have confidence that the analysis is correct.”
By contrast, physicist Shelford Bidwell didn’t mince his words about the author of a 1900 article on ‘colour sense’. Bidwell had lent apparatus to author Frederick Edridge-Green, but writes: “I was prepared to find that his new paper was rubbish; and it turned out to be rubbish of so rank a character that no competent person could possibly take any other view of it.”
ResearchGate, the professional network for researchers, today announced a significant new upgrade to its Journal Home product for publishers, with the introduction of the Open Access Agreement Upgrade (OAAU).
This new feature enables publisher partners to identify and effectively communicate with relevant researchers about funding for publishing that they are eligible for through Open Access Agreements (also referred to as Transformative Agreements, Transitional Agreements or Read & Publish Agreements) – driving greater awareness and uptake of open access publishing.
ResearchGate (press release)
JB: This part of the press release is worth pondering:
In a survey of over 2,000 researchers globally, ResearchGate found that nearly half (48%) were not at all familiar with such agreements. Even those that do know about Open Access Agreements struggle to understand if they are eligible, or how to access funding when they are.
I will be a panelist on a Council of Science Editors webinar on transformative agreements next month. One of the take-home messages will likely be that TAs are complex for sales teams, for librarians, and for researchers. The goal of the CSE session is to provide practical, actionable tips for editors whose journals are included as part of a transformative agreement. I hope to see some of you there.
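Part of the complexity is the eligibility lookup itself: matching a corresponding author’s institution and target journal against the terms of each agreement. The sketch below illustrates that matching step with invented agreement data and field names; real agreements add funding caps, date windows, article-type restrictions and more.

```python
from dataclasses import dataclass

@dataclass
class Agreement:
    consortium: str
    institutions: set[str]   # member institutions covered by the deal
    journals: set[str]       # journal titles (or ISSNs) in scope
    covers_apc: bool         # does the agreement pay the open access fee?

# Invented example agreements, purely illustrative.
AGREEMENTS = [
    Agreement("Example Consortium A",
              {"University of Somewhere", "Somewhere Institute"},
              {"Journal of Examples", "Example Letters"}, True),
    Agreement("Example Consortium B",
              {"Another University"},
              {"Journal of Examples"}, True),
]

def eligible_agreements(author_institution: str, journal: str) -> list[str]:
    """Return agreements under which this author/journal pair could publish
    open access without paying an APC directly."""
    return [a.consortium for a in AGREEMENTS
            if a.covers_apc
            and author_institution in a.institutions
            and journal in a.journals]

print(eligible_agreements("University of Somewhere", "Journal of Examples"))
# -> ['Example Consortium A']
```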
Wiley AI Partnerships: A Co-Innovation Program combines Wiley’s high-quality scientific content, global academic and institutional partnerships, market research, and brand strength with cutting-edge AI applications from partner companies. The program kicks off this month with its first partner, Potato, an AI research assistant powered by peer-reviewed literature that enables high-quality science. Wiley and Potato will collaborate to advance the application and reproducibility of science with tools that deliver well-defined research protocols.
Wiley (press release)
JB: You may also be interested in this video from Frankfurt (or you can read the press release instead). The YouTube description says:
Josh Jarrett, SVP of AI Growth at Wiley, will share key highlights from a Wiley study on the applications of AI in research, and discuss Wiley's vision for the future of AI in publishing. He emphasizes a human centric approach, where AI acts as a tool to free up creativity and critical thinking. With partnerships across industries, Wiley is in a unique position to help shape a future built on collaboration and shared innovation.
We’re excited to announce the launch of PeerJ Open Advances in Zoology, a new journal committed to addressing the most pressing challenges to animal life through innovative, high-quality research and commentary. As part of the PeerJ Open Advances series, this new journal prioritizes openness, collaboration, and equity, making participation in scientific communication accessible to all by being free to read and free to publish.
PeerJ Blog (announcement)
JB: PeerJ was acquired by Taylor & Francis, a commercial publisher, earlier this year. It’s therefore noteworthy that the journal is “free to read and free to publish”. From an editorial perspective, this paragraph from the ‘About’ section is interesting:
Each submission will be evaluated not just for its scientific merit, but for its potential to address one of the journal’s “Global Challenges” - key questions in zoology that are defined and prioritized by the Editorial Board. The decision of what to publish will be based on the significance and quality of the research, not the reputation, affiliation or location of the author. Through rigorous peer review and transparent publication processes, each article published in the journal will make a measurable impact.
Papers with titles or abstracts that mention certain artificial intelligence (AI) methods are more likely to be among the top 5% most-cited works in their field for a given year than are those that don’t reference those techniques, an analysis has found. These papers also tend to receive more citations from outside of their field than do studies that don’t refer to AI terms.
But this ‘citation boost’ was not shared equally by all authors. The analysis also showed that researchers from groups that have historically been underrepresented in science don’t get the same bump in citations as their counterparts do when they use AI tools in their work — suggesting that AI could exacerbate existing inequalities.
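For anyone curious about the mechanics of an analysis like this, here is a hedged sketch of the core step: flag papers at or above the 95th citation percentile within their field and year, then compare the rate for papers that mention AI terms with the rate for those that don’t. The dataframe and its columns are invented for illustration; this is not the study’s data or code.

```python
import pandas as pd

# Illustrative columns only; the real analysis uses large bibliometric datasets.
papers = pd.DataFrame({
    "field":       ["biology", "biology", "biology", "physics", "physics", "physics"],
    "year":        [2020, 2020, 2020, 2020, 2020, 2020],
    "citations":   [150, 12, 3, 90, 40, 2],
    "mentions_ai": [True, False, False, True, False, False],
})

# A paper is "highly cited" if it sits at or above the 95th percentile of
# citations among papers from the same field and year.
threshold = papers.groupby(["field", "year"])["citations"].transform(
    lambda s: s.quantile(0.95)
)
papers["top_5pct"] = papers["citations"] >= threshold

# Compare the share of highly cited papers by whether they mention AI terms.
print(papers.groupby("mentions_ai")["top_5pct"].mean())
```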
Societies must also ask themselves whether they have been too quick to throw in their lot with a publishing partner instead of capitalising on what makes them unique. Over the last couple of decades, many society trustees were swayed by the offers on the table from large commercial publishers seeking to take on publication of their journals. Today, it’s by no means clear that these arrangements were always in the long-term interest of the societies in question. As the American businessman Robert Kiyosaki has said, ‘It’s not how much money you make, but how much money you keep, how hard it works for you, and how many generations you keep it for.’ For example, we found that 24 UK societies have ceased to publish peer-reviewed journals in their own right since 2015. Many of these have chosen to outsource their publications to a partner, but digging into the financial performance of seven societies that outsourced their publishing in both 2015 and 2023 revealed a mean decline in their publishing revenues of 30%. By contrast, of eight societies that were self-published in both 2015 and 2023, all but one were able to grow their revenues from publishing, with a mean growth figure of 12%.
UKSG (Rob Johnson)
JB: You can read the full report here: You don’t know what you’ve got till it’s gone: the changing landscape of UK learned society publishing. Figure 5, reproduced below, is particularly interesting. The publishing arms of the Institute of Physics and the Royal Society of Chemistry performed the best, financially speaking, which is just as well because they also contribute the most revenue, in relative terms, to their parent organisations. I was surprised to see so many of the clinical Royal Colleges sitting in the bottom left of the graph.
Different publishers are situated differently for this decision. For example, a subscription focused publisher will be more concerned about whether access through an LLM will over time reduce the economic value of access to the version of record. By contrast, a pure Gold open access publisher may be more indifferent, if the scholarly need for — and therefore economic value of — certification is not likely to disappear in the near term. At the same time, it is possible that both will assess that direct access to the version of record will remain vital for scholarship and economically valuable over the long run.
The Scholarly Kitchen (Roger C. Schonfeld)
JB: You can view the (very helpful) tracker here. The accompanying Google Sheet is the one to bookmark.
Due to its novelty and rapidly spreading, worldwide impact, SARS-CoV-2 placed unprecedented pressure on medical publishers and the traditional structure of scholarly journals publishing. The present review sought to explore the changes that occurred in medical publishing in response to the COVID-19 pandemic while also revealing potential weak points. Although the decisions made by many journals in early 2020 to expedite peer review and publication were noble, the quality of the medical literature suffered, with potential effects that include loss of public trust in scientific publishing (Blum, 2024), and additional work is necessary to ensure that the publication of relevant information during the next health crisis is better managed.
The perception of OA in China has also shown a discernible improvement between the survey of 2021 and that of 2024. When asked ‘how has your attitude towards open access changed in the past two years?’, the percentage of authors who felt less positive towards gold OA was 24% in the 2021 survey (n = 603) but had fallen to 15% in 2024 (n = 1,379). This improvement cannot be attributed to differences in sampling characteristics of the two surveys, but to a real change in perceptions.
There are few empirical studies of OA publication models, including OA’s impact on academic, economic, and societal measures. Existing studies often focus on limited measures of OA impact (eg, citation rates and reporting quality) or specific clinical areas, and randomized trials are rare. More systematic evidence is needed on topics critical to current discussions (eg, how the impact of OA varies by the type of OA model, the costs and impact of funder mandates, reasons for variation in APCs, the extent to which APCs are a barrier to publication, how to address the increase in predatory journals, and whether OA directly improves science). Mandates and policies should be based on solid evidence, not just ideology, but the lack of peer-reviewed studies suggests that researchers and funders need to focus on these topics.
JAMA Health Forum (Kathryn A. Phillips)
JB: For me, the most important of these is “whether OA directly improves science”. It’s also hard to disagree with “Mandates and policies should be based on solid evidence, not just ideology”. We can live in hope.
A chatbot-like tool powered by artificial intelligence (AI) can help people with differing views to find areas of agreement, an experiment with online discussion groups has shown.
The model, developed by Google DeepMind in London, was able to synthesise diverging opinions and produce summaries of each group’s position that took different perspectives into account. Participants preferred the AI-generated summaries to ones written by human mediators, suggesting such tools could be used to help support complex deliberations.
You can read the article in Science here. Hopefully this new tool will be able to solve disputes related to selective journals, rights retention strategies, and open peer review by the time the next issue of Journalology hits your inbox. In that regard, I’m taking some time off next week, so issue 98 will likely be delayed, giving Google DeepMind plenty of time to solve these seemingly intractable problems. If DeepMind takes 7.5 million years to come back with an answer of ‘42’, then I will be very disappointed.