What a massive database of retracted papers reveals about science publishing’s ‘death penalty’

Read the full story in Science.

Nearly a decade ago, headlines highlighted a disturbing trend in science: The number of articles retracted by journals had increased 10-fold during the previous 10 years. Fraud accounted for some 60% of those retractions; one offender, anesthesiologist Joachim Boldt, had racked up almost 90 retractions after investigators concluded he had fabricated data and committed other ethical violations. Boldt may even have harmed patients by encouraging the adoption of an unproven surgical treatment. Science, it seemed, faced a mushrooming crisis.

The alarming news came with some caveats. Although statistics were sketchy, retractions appeared to be relatively rare, involving only about two of every 10,000 papers. Sometimes the reason for the withdrawal was honest error, not deliberate fraud. And whether suspect papers were becoming more common—or journals were just getting better at recognizing and reporting them—wasn’t clear.

Still, the surge in retractions led many observers to call on publishers, editors, and other gatekeepers to make greater efforts to stamp out bad science. The attention also helped catalyze an effort by two longtime health journalists—Ivan Oransky and Adam Marcus, who founded the blog Retraction Watch, based in New York City—to get more insight into just how many scientific papers were being withdrawn, and why. They began to assemble a list of retractions.

That list, formally released to the public this week as a searchable database, is now the largest and most comprehensive of its kind. It includes more than 18,000 retracted papers and conference abstracts dating back to the 1970s (and even one paper from 1756 involving Benjamin Franklin). It is not a perfect window into the world of retractions. Not all publishers, for instance, publicize or clearly label papers they have retracted, or explain why they did so. And determining which author is responsible for a paper’s fatal flaws can be difficult.

Still, the data trove has enabled Science, working with Retraction Watch, to gain unusual insight into one of scientific publishing’s most consequential but shrouded practices. Our analysis of about 10,500 retracted journal articles shows the number of retractions has continued to grow, but it also challenges some worrying perceptions that continue today. The rise of retractions seems to reflect not so much an epidemic of fraud as a community trying to police itself.
