Individuals who wish to identify potential problems in the scientific literature can either report their grievances privately (with the expectation that the issue will be handled appropriately) or post their accusations publicly. There are clearly many reasons to deal with unproven and potentially damaging allegations privately; however, a new study suggests that when this route is followed, a much smaller percentage of allegations result in a correction to the literature.
The study, published today in PeerJ, was conducted by Paul S. Brookes, an associate professor of anesthesiology at the University of Rochester Medical Center in upstate NY. Brookes examined the status of nearly 500 scientific articles that were submitted to an anonymous blog he ran during 2012, devoted to highlighting potential problems in published life sciences articles. Some 274 of these papers were blogged about, with their problems described in detail. However, allegations concerning a further 223 papers never saw the light of day, because the blog was shuttered by legal threats in early 2013. Comparing these two sets of papers for which concerns were voiced – i.e. the 'public' and 'private' sets – revealed striking differences in their current status.
Despite all the problems having been reported to the journals in question, the publicly discussed papers were, on average, retracted or corrected 7-fold more often than those for which the allegations were never publicized. This was despite similar properties between the two paper sets, including the number of alleged problems per paper, the impact factors of the journals they were published in, and the number of lab groups they originated from. Brookes says that "although a lot of people have assumed that shining more light leads to more action, no-one has actually tested this hypothesis".
In addition to more corrections and retractions, the blogged-about papers saw more combined action against the papers of particular laboratory groups. In other words, if a laboratory group had one paper with problems requiring action by a journal, this was associated with more actions on their other papers. Brookes suggests that editors may be more inclined to act on a paper if they see the sum total of a particular lab group's problems, whereas an isolated paper may not be deemed important enough to act on if corroborating evidence about other papers from the same group remains hidden.
Brookes was quick to highlight some important caveats to his study. First, the small sample size, focused mainly on image data in the life sciences, makes it unclear whether these findings are generalizable to the scientific literature at large. Second, due to the nature of the data collection, and the fact that the raw data set for the study is essentially a list of problems that could be interpreted as specific allegations of scientific misconduct, the study is unlikely to be repeated.
The study has some important implications for the burgeoning field of "post-publication peer review", which encompasses a number of initiatives, some of which allow users to leave anonymous comments about any published paper. These efforts, and a number of blogs on the subject, have drawn criticism, but results such as those of Brookes' study suggest that these approaches can yield a greater rate of corrections to the scientific literature. Brookes described the current system for post-publication peer review as a work in progress, stating "there's a need for this type of discussion, but the jury is still out on exactly what the best system is, who should be allowed to comment, will they be afforded anonymity, and of course who will pay for and police all this activity".
PeerJ is an Open Access publisher of peer reviewed articles, which offers researchers a lifetime publication plan, for a single low price, providing them with the ability to openly publish all future articles for free. PeerJ is based in San Francisco, CA and London, UK and can be accessed at https://peerj.com/.
All works published in PeerJ are Open Access and published using a Creative Commons license (CC-BY 4.0). Everything is immediately available—to read, download, redistribute, include in databases and otherwise use—without cost to anyone, anywhere, subject only to the condition that the original authors and source are properly attributed.
PeerJ Media Resources (including logos) can be found at: https://peerj.com/about/press/
Note: If you would like to join the PeerJ Press Release list, visit: http://bit.ly/PressList
For the Authors: Paul Brookes, email@example.com
Abstract (from the article):
Several online forums exist to facilitate open and/or anonymous discussion of the peer-reviewed scientific literature. Data integrity is a common discussion topic, and it is widely assumed that publicity surrounding such matters will accelerate correction of the scientific record. This study aimed to test this assumption by examining a collection of 497 papers for which data integrity had been questioned either in public or in private. As such, the papers were divided into two sub-sets: a public set of 274 papers discussed online, and the remainder a private set of 223 papers not publicized. The sources of alleged data problems, as well as criteria for defining problem data, and communication of problems to journals and appropriate institutions, were similar between the sets. The number of laboratory groups represented in each set was also similar (75 in public, 62 in private), as was the number of problem papers per laboratory group (3.65 in public, 3.54 in private). Over a study period of 18 months, public papers were retracted 6.5-fold more, and corrected 7.7-fold more, than those in the private set. Parsing the results by laboratory group, 28 laboratory groups in the public set had papers which received corrective action, versus 6 laboratory groups in the private set. For those laboratory groups in the public set with corrected/retracted papers, the fraction of their papers acted on was 62% of those initially flagged, whereas in the private set this fraction was 27%. Such clustering of actions suggests a pattern in which correction/retraction of one paper from a group correlates with more corrections/retractions from the same group, with this pattern being stronger in the public set. It is therefore concluded that online discussion enhances levels of corrective action in the scientific literature. Nevertheless, anecdotal discussion reveals substantial room for improvement in handling of such matters.