Journalology #79: Open Japan



Hello fellow journalologists,

Some Journalology readers will be playing catch-up after attending the SSP meeting in Boston last week. They will be pleased to know that it was a light news week, so they should be able to catch up by quickly skimming through this issue.

If you have time on your hands, you could respond to the public consultation on the draft OASPA equity paper or NISO’s revised draft of the Journal Article Version document.

But first, a message from the primary sponsor, Editorial Hub, which has now sponsored six issues of this newsletter. The Journalology newsletter is free to read because of the sponsors’ support.

Thank you to our sponsor, The Editorial Hub Ltd.

As specialists in peer-review administration, research integrity, training and management, The Editorial Hub Ltd provides exceptional, efficient and timely editorial support. Our unparalleled service is reliable, high-quality and continuous, 52 weeks a year.

Please get in touch and mention that you are a Journalology reader for our best rates.

News

Japan’s push to make all research open access is taking shape

The Japanese government is pushing ahead with a plan to make Japan’s publicly funded research output free to read. In June, the science ministry will assign funding to universities to build the infrastructure needed to make research papers free to read on a national scale. The move follows the ministry’s announcement in February that researchers who receive government funding will be required to make their papers freely available to read in institutional repositories from January 2025.

Nature (Dalmeet Singh Chawla)

JB: It’s not clear from the news story whether there will be an embargo for green OA deposits or if a fully open licence (like CC BY) will be mandated. Does this affect any paper with an author from Japan, or only papers with Japanese corresponding authors? If anyone knows the answer to these questions and can point me to a URL, please reply to this email and I’ll include an update in the next issue.


OASPA calls for community feedback on draft recommendations to increase equity in open access

Following the background and foundations described at the start of 2024 this post reveals OASPA’s draft recommended practices to increase equity in open access (OA). These are open for feedback until 1 July 2024.
OASPA is addressing challenges of inequity in all routes to OA through this body of work. We also want to avoid erosion of trust in the very concept of OA due to exclusionary practices. There is scope for improvement in all approaches, and our recommendations cover all models of OA, across book and journal publishing, and across hybrid and fully OA publishing.

OASPA (announcement)

JB: You can read the consultation document here. Comments need to be sent to OASPA by July 1. The challenge here will be to balance idealism with pragmatism.


Announcing the State of Open Infrastructure 2024

There is so much to digest in this report, but we hope you take away the richness and opportunity that open infrastructure provides as an alternative to many of the models and tools we rely on today. We believe that open infrastructure has an indispensable and irreplaceable role in advancing a research ecosystem that prioritizes access and participation by and for all — and that anchoring our technical systems in community, interdependence, and openness are competitive advantages. The funding, policy, and technological landscape in which open infrastructure is embedded is ever-changing and varied, and it is more important than ever that we look deep into the evidence and trends to make more informed, strategic, and coordinated investments to increase the resilience and health of this invaluable ecosystem of infrastructure.

Invest in Open Infrastructure (Kaitlin Thaney)

JB: Open infrastructure is a fantastic idea in principle, but someone needs to pay for it. Companies invest millions of dollars a year in infrastructure in order to gain a commercial advantage. Will ‘open’ be able to compete with that level of financial investment?


NISO’s Draft Revision of the Journal Article Version (JAV) Recommended Practice Now Open for Public Comment

The National Information Standards Organization (NISO) announced today that its draft revision of the Journal Article Version (JAV) Recommended Practice is open for public comment through July 7, 2024, at the project web page.
First published in 2008, the JAV Recommended Practice was developed to describe different versions of online scholarly content. Since then, publishing practices have continued to evolve, and with changes such as the rapid growth in preprint publications, the concept of a single version of record has become less relevant. Questions about citations for different versions and version labeling, for example, have highlighted the need for standardization of terms as well as recommendations for how to manage, track, and index multiple versions. The NISO working group was formed to address these challenges and develop a revised JAV Recommended Practice including an appendix with multiple examples illustrating a variety of use cases for those looking for guidance.

NISO (announcement)

JB: This article from 2022 (The State of the Version of Record) provides a useful overview that’s worth revisiting.


'Hidden' citations conceal the true impact of scientific research

Papers introducing concepts that have since become common knowledge are often undercited by researchers, skewing those articles’ true impact. That’s the conclusion of a new study using machine learning to identify “foundational” work in science that is often not properly cited. Being able to count such hidden citations could provide more accurate bibliometric measures of impact, the study says.
The number of times a paper is cited is widely seen as a marker of its scientific credibility. But some concepts or ideas are so well known that no one cites them. It would be unusual for an article on, say, general relativity to refer to Albert Einstein’s original 1915 paper on the subject. Xiangyi Meng, a physicist at Northwestern University in the US, who led the new study, calls such non-references “hidden citations”.

Physics World (Michael Allen)

JB: You can read the full paper in PNAS Nexus here: Hidden citations obscure true impact in science. Why are scientists obsessed with citations?!? Ah, yes, that’s why...


Biomedical paper retractions have quadrupled in 20 years — why?

The latest research, published on 4 May in Scientometrics, looked at more than 2,000 biomedical papers that had a corresponding author based at a European institution and were retracted between 2000 and mid-2021. The data included original articles, reviews, case reports and letters published in English, Spanish or Portuguese. They were listed in a database collated by the media organization Retraction Watch, which records why papers are retracted.
The authors found that overall retraction rates quadrupled during the study period — from around 11 retractions per 100,000 papers in 2000 to almost 45 per 100,000 in 2020. Of all the retracted papers, nearly 67% were withdrawn due to misconduct and around 16% for honest errors. The remaining retractions did not give a reason.

Nature (Holly Else)

JB: You can read the article in Scientometrics here: Biomedical retractions due to misconduct in Europe: characterization and trends in the last 20 years.

The big question, which is addressed at the end of the news story, is whether the increase in retractions is due to better detection, a greater willingness to issue retractions, or an increased rate of offending.

The regional differences are worth noting:

The study also identified the four European countries that had the highest number of retracted biomedical science papers: Germany, the United Kingdom, Italy and Spain. Each had distinct ‘profiles’ of misconduct-related retractions. In the United Kingdom, for example, falsification was the top reason given for retractions in most years, but the proportion of papers withdrawn because of duplication fell between 2000 and 2020. Meanwhile, Spain and Italy both saw huge rises in the proportion of papers retracted because of duplication.

How are researchers responding to AI?

Most academic researchers and research authors say they are using artificial intelligence (AI) tools in their research practice, despite concerns over the loss of critical thinking skills, respect for intellectual property (IP) rights, and mistrust in AI providers.
We recently conducted a survey of over 2,000 researchers across geographies, subject disciplines—including Humanities, STM, and Social Sciences—and different career stages to hear directly from the research community about how they are reacting to and using AI in their work.
The results reveal the key considerations in researchers’ decisions to engage with AI, including what excites and concerns them, and how they already use—or plan to use—tools already available to them.

Oxford University Press (press release)

JB: You can read the report here.


Other news stories

ICMJE Seeking New Member Journals from Sub-Saharan Africa and South Asia

Editage brings expert services and AI products for researchers on one platform

Springer Nature's latest regional research integrity survey shows growing global pattern around researcher needs for data training

STM comments on the report on “access and re-use of scientific publications and data”

Kriyadocs partners with Global Campus to Enhance Peer Review Workflows

ResearchGate and Bentham Science Publishers announce Journal Home partnership

COUNTER Commits to the Principles of Open Scholarly Infrastructure

Charlesworth Partners with APS to Offer WeChat Marketing Services in China

Transform Your Society’s Online Presence with our INTEGRATOR

Wiley and OA Switchboard partner to make open access data sharing easier

How Portico preserves the most at-risk content

How academic publishers profit from the publish-or-perish culture JB: This is the Financial Times’ take on the academic publishing sector, which is remarkably light on useful insight.

Springer Nature and Couperin sign new agreement to advance open access publishing in France

Thank you to our sponsor, Nicky Borowiec Design & Brand

I’m an experienced brand strategist and designer specialising in academic publishing, with a brand portfolio that includes Nature, the BMJ, and University of London Press.

I focus on maximizing the impact of brands: foundational work on mission and values; structuring and future-proofing portfolios in an intuitive, customer-focused way; and translating brand purpose into thoughtful, strategic visual identities and messaging.

Click here to view my work and to learn more about how I can help you to develop your brand strategy.

Alternatively, get in touch.

Opinion

Predicting retractions with the Papermill Alarm

Hindawi’s work to clean up the papermill problem in their journals is laudable. They showed integrity in admitting mistakes and rectifying them. They have made significant steps so far. It can’t have been fun or motivating work, but it was the responsible thing to do and that should be recognised.
At Clear Skies, what we want is to catch as many cases as possible prior to peer-review so that publishers can focus on quality service instead of having to constantly hunt for fraud. Hindawi’s retractions give us some interesting insights there.
The current version of the Papermill Alarm detects signals in 98.9% of the Hindawi retractions conducted over the last 12 months. It’s never nice to see the harms caused by papermills, but it is good to see independent verification of the Papermill Alarm’s predictions.

Medium (Adam Day)

JB: Adam answered the question that immediately came to mind as I read this post:

The Papermill Alarm actually learns from retractions automatically. So in order to get the above figures, I had to remove Hindawi’s retractions from the inputs to stop the Papermill Alarm from predicting something it had already seen.

Are commitments to open data policies worth the paper they are written on?

Why am I so hung up on data-sharing? The reason is simple. The more I share my own data, or use data shared by others, the more I appreciate the value of doing so. Errors are ubiquitous, even when researchers are careful, but we'll never know about them if data are locked away.
Furthermore, it is a sad reality that fraudulent papers are on the rise, and open data is one way of defending against them. It's not a perfect defence: people can invent raw data as well as summary data, but realistic data are not so easy to fake, and requiring open data would slow down the fraudsters and make them easier to catch.
Having said that, asking for data is not tantamount to accusing researchers of fraud: it should be accepted as normal scientific practice to make data available in order that others can check the reproducibility of findings. If someone treats such a request as an accusation, or deems it "unreasonable", then I'm afraid it just makes me suspicious.
And if organisations like Springer Nature and Max Planck Gesellschaft won't back up their policies with action, then I think they should delete them from their websites. They are presenting themselves as champions of open, reproducible science, while acting as defenders of non-transparent, secret practices. As we say in the UK, fine words butter no parsnips.

BishopBlog (Dorothy Bishop)

JB: The story behind this comment is interesting in and of itself. It was also covered in For Better Science. I hadn’t heard the parsnips maxim before.


Other opinion articles

Peer Review Week 2024: "Innovation and Technology in Peer Review"

Gaining and Measuring Article Attention

The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool

What is editorial-led peer review?


Webinars

If you want to learn about some of the core challenges facing the scholarly publishing community, this list of webinars could be of interest:

Scholarly Publishing Webinars

I’ll try to keep this Google Doc updated. Please help me by sending details of webinars that you’re hosting (just hit reply to this message).


And finally...

Scholarly publishers get a lot of traffic from search engines, which could be disrupted by AI. The Clarke & Esposito team covered this thorny problem in the latest issue of The Brief. This assessment in The Guardian suggests there’s still a way to go.

For those of us who watch the industry, therefore, the question became: how will Google respond to the threat?
Now we know: it’s something called AI overviews, in which an increasing number of search queries are initially answered by AI-generated responses. “Sometimes,” the company burbles, “you want a quick answer, but you don’t have time to piece together all the information you need. Search will do the work for you with AI overviews.” Or, to put it more succinctly: “Let Google do the searching for you.”
To date, some of this searching suggests subhuman capabilities, or perhaps just human-level gullibility. At any rate, users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for up to 30 minutes, and that geologists suggest eating one rock per day (presumably to combat iron deficiency). Memo to Google: do not train your AI on Reddit or the Onion.

Until next time,

James

