Failure to share materials and data undermines the scientific profession. Without doubt, science is one of humanity’s greatest inventions. While humans have always been curious, and our ancestors accumulated enormous amounts of useful lore over the ages, science as we know it is an invention of the modern era. As currently conceived, scientists put forth theories and hypotheses, secure data relevant to those hypotheses, and report their findings publicly. The public reporting is crucial; indeed, absent publication, science would not exist. Not only does this communication let others judge the quality of the work; it also puts other scientists in a position to confirm, disconfirm, or, as often happens, reconfigure the findings in ways that generate further study and, in the happy instance, deeper insights about how our various worlds—physical, natural, and social—work.
We are all aware of examples of science gone flagrantly wrong: data deliberately misread, control conditions inappropriately administered or omitted altogether, findings that are sheer fabrications. Recently, an enormous amount of attention has been devoted, and properly so, to findings in psychology that make headline news, only to be disconfirmed by a new set of studies. Every time such malfeasance or compromised work comes to light, the credibility of science is further undermined.
Recently I learned from two colleagues, Ellen Winner and Sam Mehr, of another practice that threatens to undermine the routine workings of science. When researchers submit articles to reputable journals, they commit to sharing their data and stimuli with other trained researchers, and the journal commits to aiding this process. These commitments matter: in many cases it is simply not possible to confirm or disconfirm findings without access to the actual stimuli that were used or to the precise data that were collected.
In a case reported by Winner and Mehr, researchers sought to replicate findings from a widely publicized study that claimed musical performances are evaluated more on visual information than on auditory information. In other words, informants pay more attention to how the performer looks than to how his or her playing sounds. When an attempt at replication failed, Winner and Mehr sought to secure the materials used in the original study. The original researcher refused to provide them. Winner and Mehr then enlisted the journal editors’ help; the editors were sympathetic and said they would assist, but eventually they stopped replying to emails. The materials were never secured. In a similar case, one of these researchers sought a test developed for a study of music processing. The test’s author stonewalled, and the journal editor apologetically stated, “Well, we cannot police such things.” Only weeks of non-stop protest induced the recalcitrant creator to provide the test.
This failure to cooperate has two unfortunate consequences. First, we cannot know whether widely reported findings are in fact valid. Second, when word of non-cooperation, or of extremely reluctant cooperation, spreads, doubts are sown about the validity of scientific practices more generally, and it becomes hard to know how to defend the profession.
While there is no literal Hippocratic Oath for science, anyone who calls herself a scientist should understand the obligations inherent in membership in the profession. Until now, most scientists and editors have simply swept these ‘failures to respond’ under the rug. While I would not call for the construction of public stocks dedicated to the pummeling of non-sharing scientists, I do propose making their names and misdeeds public. And if they continue their non-cooperation, they should no longer be allowed to publish in respectable outlets.