Medical writer Ivan Oransky, co-founder of the blog Retraction Watch and Editor-in-Chief of the autism research news website Spectrum, speaks about the difficulties of assessing the quality of peer review and of retraction, the process by which journals withdraw articles containing flawed or erroneous data. He offers advice for non-specialist readers of the scientific literature.
What is the problem with predatory journals?
Ivan Oransky: A lot of these large publishers like to talk about how awful predatory publishing is, yet at the same time some of their journals are having hundreds of papers retracted because they don’t stand up to scrutiny. I’m not quite sure how such papers are different from those published in the so-called ‘predatory’ journals.
People have tried to define what makes a journal ‘predatory’: does it charge too much, does it charge too little, does it look ‘professional’, does it follow certain standards… However, you can check all those boxes and still be a lousy journal.
The key question is: Is the peer review process robust? Does it actually improve manuscripts and lead to the rejection of terrible manuscripts? To me those are the only things that matter. I don’t care how much you charge, I don’t care what your website looks like (as long as it’s accessible), or how long you’ve been in business.
Unfortunately, a lot of journals that would never be labelled ‘predatory’ by the usual definitions are not doing a very good job of demonstrating that their peer review process is robust.
In my opinion, journalists should always be sceptical and not play into a false binary of whether an article is in a ‘predatory’ or ‘non-predatory’ journal, or whether it has been peer-reviewed or not. Even within a peer-reviewed journal, not everything is peer-reviewed to the same extent. They need to understand how the system works.
It’s fine to say that an article hasn’t been peer-reviewed at all: that is clear to people, and you see journalists stating this when referring to pre-prints. However, if it has been peer-reviewed, I’d want to know how well it has been peer-reviewed. We can’t measure this at the moment, mostly because journals are ridiculously opaque about their peer review process.
If journalists limited themselves to journals that publish their peer review reports, there would be a lot of slow news days as there are only a handful of them. I think that if enough journalists started asking to see the reviews of papers, particularly those of ones that end up being retracted or seriously questioned, we might start to see things change. Open peer review is something people have spoken about for a long time, and we are starting to see it happen.
Is the problem getting bigger?
Ivan Oransky: We are certainly becoming more aware of it because we have platforms such as Retraction Watch and PubPeer that allow people to discuss dubious papers. The number of retractions has gone from about 40 papers a year in the year 2000 to 3300 papers retracted in 2021. Even bearing in mind that the number of papers published has also increased, I’d say there has been at least a 40-fold increase in the rate of retraction over that period.
Has the problem become worse since the pandemic?
Ivan Oransky: Despite claims back in June 2020 that an alarming number of Covid-19 papers were being retracted (see Commentary by Yeo-Teh, Ling, and Tang), there is no evidence that the rate of retraction is higher than for other papers. Retractions take on average three years, so it is too early to do this kind of analysis.
To date [10th February 2022] there have been 209 retractions of Covid-19 papers, but over that time there have been around 5000 retractions overall. We might start to see some of the early cases of misconduct or fraud trickle in now.
The amount of time it takes to retract a paper clearly indicates that publishers are not prioritising this. We routinely request the correspondence between universities and journals just to see where the delay lies and, more often than not, the journals sit on the results of investigations by universities. By delaying a retraction, a publisher makes it less likely that the journal’s impact factor will take a hit.
Last year, Retraction Watch reported on a fake paper in which a researcher was falsely listed as an author. Why is this happening?
Ivan Oransky: It’s a mystery. Journals aren’t saying much about these cases. We’ve seen a fair amount of impersonation; people make up a name, use a ‘famous’ person’s name, or add an affiliation to a university department to gain credibility. The pressure to publish and get your name on things is behind all of this. In some cases, the authors have come forward afterwards and admitted to it, explaining their reasons.
What should readers be aware of when they are reading scientific literature?
Ivan Oransky: Science is a human endeavour, and scientists are under incredible pressure to publish. This can end up leading people who are otherwise quite honest to do things that are on the edge of honesty. It’s a messy world out there, and any headline that tries to distil a study, or even a group of studies, is not going to be 100% right.
As former US president Ronald Reagan said: “Trust, but verify”. We should also be aware of our own biases, especially when we read a study that appears to agree with what we already think is true.