by Michael D. Anestis, Ph.D.
As is so often the case, the listserv for the Society for a Science of Clinical Psychology (SSCP) recently brought my attention to an interesting and highly important story: the saga of attempts by researchers to publish replication studies (particularly when such studies do not report results consistent with the original study).
To explain why I think this issue is both important and interesting, let me start by explaining what I mean by replication and then explain why it is so pivotal in science. Any time a scientist conducts a study, analyzes his or her data, and publishes the results, there is always the possibility that the findings were due purely to chance. Generally speaking, good hypotheses are driven by theory and build off of prior findings, which lowers (although does not eliminate) our concern that a result might not be real. Every now and then, however, a researcher publishes a controversial result, and concerns that the results are not real skyrocket. The best way to address those concerns is replication. Rather than simply debating the potential truth of a claim, scientists test it by following the same procedures as the original researcher and attempting to replicate the results. The more often a result can be replicated independently (in the absence of repeated failures to replicate), the more confident we become that the original effect was real. By doing this, we don't "prove" anything to be true, but we increase our confidence that we are thinking about things in a way that accurately reflects reality. This is a big, big deal.
Now...the fact that something is a big deal does not make it popular. As it turns out, it is extremely difficult to publish pure replications. If a researcher replicates an effect in the process of also testing something else or adding another dimension to the study, the process becomes easier; however, if he or she deviates from the original design that way, it also becomes easier for the original researcher to shrug off failures to replicate by noting that the new experiment did something different. So...in order to remain employed, gain tenure, and thrive professionally, researchers need to continue to publish regularly, but in order to replicate controversial findings and further our understanding of the validity of those effects, researchers would have to invest their time and effort into a process unlikely to yield results that help them professionally. That's not ideal.
The reason this came up on the SSCP listserv goes back to a story from last year, when Dr. Daryl Bem of Cornell University published a series of nine studies in the highly prestigious Journal of Personality and Social Psychology that he claimed were supportive of precognition. Needless to say, these results were met with substantial skepticism and, accordingly, some researchers were willing to take on the difficult and potentially not so rewarding task of trying to replicate the results. As it turns out, multiple researchers have now successfully published failures to replicate Bem's original findings, which is unsurprising to those of us who found the original results counterintuitive, and highly important in that it put our skepticism to the empirical test. What's really interesting, however, is how difficult it was for researchers to get these important results through the filter of the publication process.
To help illustrate what the process was like, I'm including a link to an article by Chris French of the University of London, who was the lead author of one of the now-published failed replication efforts. The article is a really interesting account of how these results got to where they are now and a commentary on the role of replication in science:
For those of you interested in reading the results themselves, you can find them here (they are published in PLoS ONE, a high-impact, open access journal).
I've seen a number of potential solutions offered as ways to overcome the obstacles associated with publishing replications. What do you think would be the best answer? Do we even need one?
Dr. Anestis is an incoming assistant professor in the Department of Psychology at the University of Southern Mississippi.