Data Reproducibility: The Chink in Science’s Armor

May 7th, 2015 | Technology

By Christopher Fiscus, Biotechnology, 2015

Science is an additive discipline in which each novel contribution builds upon the breadth of existing scientific knowledge and acts as a launch pad for further study.  The scientific community is currently in the midst of a crisis: many studies are not reproducible, meaning that their results cannot be adequately verified by other scientists.  An estimated 75-90% of preclinical studies published in high-impact journals, such as Science and Nature, cannot be replicated (Begley and Ioannidis 2015).  This lack of reproducibility undermines science as a vehicle for human progress because new research avenues are pursued on the basis of presumptive hypotheses and unverifiable findings.  The result is a widespread waste of resources, a loss of public trust in the scientific establishment, and a reduced capacity of science to better the quality of human life.  Potential solutions to this crisis include improving researcher training, employing more rigorous peer review, and increasing the transparency of the scientific literature.

The most notable cause of the data reproducibility crisis is the use of poor scientific practices (Begley and Ioannidis 2015; Yarborough 2014).  With more professional scientists than ever competing for a share of ever-shrinking funding budgets, the pressure to publish novel results as quickly as possible, in order to establish credibility, gain recognition, and attract further funding, has intensified.  This rush to publish has led many researchers to adopt questionable ethics and research practices, which in turn produce haphazard results.  Examples of these practices include falsifying data, poor experimental design, improper or omitted controls, inadequate blinding, and the misuse of statistics (Landis et al. 2012; Nuzzo 2014).  Because new scientific endeavors build upon previous work, the net result of these practices is a gross waste of time and money on projects that are partially or wholly unfounded.  This waste breeds public wariness and distrust of science, which in turn can affect funding and slow scientific progress.  It is therefore the duty of the scientific community to earn and maintain the public's trust by conducting research in a manner that minimizes the waste of time and money and produces reliable data (Yarborough 2014).
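
To see how one of these practices, the misuse of statistics, can manufacture irreproducible findings, consider a minimal simulation sketch (a hypothetical illustration, not an analysis drawn from any of the cited studies).  It models a study with no real effect in which twenty outcome measures are tested and only the most "significant" one is reported:

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value from a z-test on the difference in sample means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Phi(|z|) via the error function; p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
TRIALS, OUTCOMES, N = 1000, 20, 30
hits = 0
for _ in range(TRIALS):
    # Both groups are drawn from the same distribution: there is no real effect.
    best_p = min(
        z_test_p([random.gauss(0, 1) for _ in range(N)],
                 [random.gauss(0, 1) for _ in range(N)])
        for _ in range(OUTCOMES)
    )
    if best_p < 0.05:  # report only the most "significant" outcome
        hits += 1

print(f"False-positive rate when cherry-picking 20 outcomes: {hits / TRIALS:.0%}")
# A single pre-specified test is "significant" ~5% of the time by chance;
# picking the best of 20 pushes that to roughly 1 - 0.95**20, about 64%.
```

A lab that reports only the best of many unplanned comparisons will thus publish a "positive" result most of the time even when nothing real is there, and that result will predictably fail to replicate.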

A lack of transparency in the reporting of research methodologies also harms data reproducibility.  It is extremely difficult, if not impossible, for a fellow scientist to fully understand or reproduce the findings of a study when it is unclear what work was done to reach a particular conclusion (Ryan 2011).  According to Landis et al. (2012), poor reporting of experimental methods correlates with poor experimental design in the scientific literature, especially in experiments using animal subjects.  This trend has gone largely unchecked, even in manuscripts that have undergone peer review.  Poor reporting does not always indicate poor science, however.  In some disciplines, such as field biology, it is customary for observational methods to be described only loosely because of the innumerable variables present in nature; the presumed quality of the work then rests on the reputation of the investigator (Ryan 2011).  Nevertheless, poor reporting of data and methods limits the scientific community's ability to reproduce a study and verify its data, meaning that the validity of the study cannot be adequately evaluated.

Improving the reproducibility of scientific work is challenging, but not impossible.  Foremost, scientists must be better trained to adhere to good research practices, to employ sound experimental designs, and to value research ethics; universities and other research institutions need to provide this training and place greater emphasis on these skills.  Because science is a self-assessing discipline, peer review should also be more stringent, ensuring that the data and methods in a manuscript are described well enough to be reproducible.  To this end, alternative systems such as open peer review, in which reviewers are no longer confined to those selected by the journal's editor, have been established as an alternative to traditional peer review.  Opening review increases the number of reviewers scrutinizing a pre-print manuscript, which should result in a higher-quality publication.

Additionally, to solve the data reproducibility crisis, the transparency of scientific publications should be increased.  Some journals, such as PLoS ONE, require that the full dataset and methods used to derive a paper's findings be made freely available to the public upon publication.  These requirements are not yet standard practice in the publishing industry, however, and are noticeably absent from many high-impact journals.  Sometimes, especially in highly competitive fields or when doctor-patient confidentiality is an issue, data are intentionally omitted or only vaguely described in published research (Ioannidis and Khoury 2011).  Such data can often be obtained by request, but calling this full transparency is debatable, since barriers to access remain.  Sharing these data, when ethical, advances scientific progress and helps ensure the quality of scientific studies.
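
As a concrete picture of what such transparency can look like in practice, here is a minimal sketch of a reproducible analysis script.  It is purely illustrative: the file name measurements.csv, the provenance fields, and the bootstrap "analysis" are assumptions for the example, not requirements of PLoS ONE or any other journal.  The idea is simply that a fixed random seed, a fingerprint of the shared dataset, and a record of the software environment let a reader rerun the analysis and confirm they obtain the same result:

```python
import hashlib
import json
import platform
import random
import sys

SEED = 2015  # fixed so the resampling below is repeatable
random.seed(SEED)

def sha256_of(path):
    """Fingerprint the dataset so readers can confirm they have the same file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def bootstrap_mean(values, resamples=1000):
    """Stand-in for a real analysis: a bootstrap estimate of the mean."""
    means = [sum(random.choices(values, k=len(values))) / len(values)
             for _ in range(resamples)]
    return sum(means) / len(means)

DATA_FILE = "measurements.csv"  # hypothetical one-column dataset shared with the paper
values = [float(line) for line in open(DATA_FILE) if line.strip()]

# Write a provenance record alongside the result: same seed + same data
# fingerprint + same code should yield the same estimate on rerun.
record = {
    "seed": SEED,
    "python": sys.version.split()[0],
    "platform": platform.platform(),
    "data_sha256": sha256_of(DATA_FILE),
    "estimate": bootstrap_mean(values),
}
with open("provenance.json", "w") as out:
    json.dump(record, out, indent=2)
print(json.dumps(record, indent=2))
```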

Some publishers have taken this transparency a step further.  Following the open access movement, publishers such as PLoS and BioMed Central have made their entire catalogs of scientific publications freely available on the internet, on the philosophy that publicly funded scientific research should no longer be restricted to a paywall's subscribers.  The hope is that the open access movement will increase the reproducibility of data and the accessibility of scientific information for the greater good.

It is the duty of scientists to maintain the public's trust, to ensure that research is conducted ethically and efficiently, and to ensure that science continues to advance mankind in its endeavors.  Importantly, the data reproducibility crisis is not a reflection of a failed scientific method; rather, it results from researchers neglecting scientific rigor in the professional practice of science (Begley and Ioannidis 2015).  Science, therefore, is not doomed.  There is hope that the data reproducibility crisis can be overcome and that science can return to its role as a self-correcting discipline that strives to satisfy curiosity and improve the quality of human life.


References Cited

Begley, C. Glen, and John P.A. Ioannidis. "Reproducibility in Science: Improving the Standard for Basic and Preclinical Research." Circulation Research 116 (2015): 116-26. Print.

Ioannidis, John P.A., and Muin J. Khoury. "Improving Validation Practices in 'Omics' Research." Science 334 (2011): 1230-32. Print.

Landis, Story C., et al. "A Call for Transparent Reporting to Optimize the Predictive Value of Preclinical Research." Nature 490 (2012): 187-91. Print.

Nuzzo, Regina. "Statistical Errors." Nature 506 (2014): 150-52. Print.

Ryan, Michael. "Replication in Field Biology: The Case of the Frog-eating Bat." Science 334 (2011): 1229-30. Print.

Yarborough, Mark. "Taking Steps to Increase the Trustworthiness of Scientific Research." The FASEB Journal 28 (2014): 3841-46. Print.