Typically, the time between a research paper being submitted to an academic journal and it being published is 3-4 months. However, during the COVID-19 pandemic, many researchers have turned to preprint servers to make their work publicly available immediately, before peer review.
Even before the pandemic, a growing number of early-career biomedical researchers supported the use of preprint servers to gain timely recognition for their work; yet the field as a whole has been particularly slow to embrace these publications. Unlike in physics, only a minority of articles in the life sciences and medicine are posted as preprints. This reluctance can be explained, at least in part, by the risk that flawed research could lead to changes in clinical practice that harm patients.
Ludo Waltman, Professor of Quantitative Science Studies at Leiden University, has been analyzing the publication process of COVID-19 research papers during the pandemic: “Readers may not understand the status of these preprints and the difference between peer-reviewed and non-peer-reviewed work. This could lead to misinterpretations that might have harmful consequences.” – Read the full interview
Since January 2020, around 30,000 preprints on COVID-19 have been published. Overall, only around 20% of these preprints have been converted into peer-reviewed articles, mainly because of the time peer review takes. The percentage rises to about 50% when looking only at preprints that appeared in January-February.
Ludo Waltman is trying to understand the reasons behind these figures: “Some of these preprints have major quality problems so it is difficult to find a journal that is willing to publish them. … . In some cases, it may be the authors’ deliberate choice to make their work available as soon as possible through a preprint. They may not be interested in going through the process of publishing in a peer reviewed journal.”
Publishers have embarked on various initiatives to speed up the peer review process. These include the COVID-19 rapid review initiative, which allows the transfer of referee reports and articles between journals owned by different publishers, and online platforms that enable researchers to review each other’s work openly, such as Rapid Reviews: COVID-19 by MIT Press.
Some publishers have also been able to invest in staff and/or artificial intelligence (AI) functionality to accelerate peer review and increase their output. This may not necessarily be beneficial.
Adam Marcus, co-founder of Retraction Watch, a blog that reports and tracks retracted papers: “Given that the number of published papers that get read by more than a few people is very small, the solution is not to publish more papers, but better ones” – Read the full interview
Peer review isn’t perfect. It can fail to identify weaknesses or, in some cases, major flaws in a paper, resulting in a retraction. To date, Retraction Watch lists 37 retracted COVID-19 papers, including a couple in very prominent journals on the safety of malaria drugs and blood pressure medications for treating COVID-19 patients.
“There are lots of little changes we can make to raise the overall quality of the review system and of the outputs,” says Marcus. He thinks that journals should be more diligent in checking reviewer suggestions made by authors (to make sure they’re real people and/or don’t have conflicts of interest with the authors themselves), and in detecting image manipulations or flawed statistical methods before papers are published rather than after.
According to Waltman, COVID-19 is changing the way we assess scientific literature: “The pandemic is forcing everyone, … researchers, journalists, policymakers to … gradually move away from the dichotomy between research in journals, which has been peer reviewed and is supposed to be reliable, and other types of work that don’t have this ‘stamp of approval’.”
Rather, readers face a continuum: at one extreme, very low-quality research that hasn’t gone through any form of quality control; at the other, work that has been through thorough peer review; and in between, research in journals that carry out a more superficial peer review, and preprints that have benefitted from a certain level of quality control and expert feedback.
“We are still in the process of developing clear markers that inform researchers (and the public at large) about the level of trust you can have in a research output. We need to find a language that is easily understood by doctors, journalists, policymakers…, and conveys different levels of soundness or trust you can have in the findings,” he explains.
“Further understanding of how research is organised and published will inform better ways of disseminating the information to different sectors of society,” says Waltman. And, as Marcus points out, electing politicians who value science and expertise is crucial for putting the emerging knowledge about COVID-19 into practice effectively.