by Michael D. Anestis, M.S.
Today I find myself a part of a familiar scene. I'm sitting at my desk, looking at my computer with half of the screen showing a Firefox browser in which I am typing a PBB post and half of the screen opened to a PDF of a published journal article discussing the dodo bird hypothesis. For regular readers, this might leave you with flashbacks to the movie "Groundhog Day," minus the comedic genius of Bill Murray (although, admittedly, this is not how I would hope somebody would describe PBB). Why the recurring theme? Well, as I have said before, my frequent commentaries on this issue are due to the persistence of those who believe in the faulty conclusion that all psychological treatments are equal for all disorders. With all of the misinformation out there, I think it is important to call attention to the facts every time distortions receive publicity.

This is, after all, an incredibly important question. If all treatments work the same for all conditions, this means that a remarkably large number of helpful resources are available in any given community for individuals suffering from mental illnesses and nobody needs to put much effort into determining which is the best available option. If, on the other hand, particular treatments are more effective than others for particular diagnoses, this means that it is important that we learn to perfect new treatments and that consumers become educated regarding which treatments are most likely to help them so that they can successfully navigate a marketplace flooded with inferior options. We have covered this a number of times in the past, but as a freakishly quick recap, here's what we know: the dodo bird hypothesis is wrong and all treatments are not the same for all disorders.
If we know this, however, why does the belief persist? There are a number of answers for this. For one thing, some highly educated individuals have run a number of remarkably flawed studies on the matter that have resulted in misleading findings. Because so few of us are trained to look at data and understand what a particular study really tells us, we hear their conclusions and, because we respect the authors' credentials, assume that they are speaking truthfully. Additionally, most of us who are trained in data analysis and who understand the actual answers to this question do very little to make our voices heard and, as a result, misinformation is able to persist. This is where we hope PBB comes in, as we aim to provide you with descriptions of what we really know, based upon scientific studies.
Fortunately, my job was made a little bit easier than normal today. Jed Siev, whose work we have covered on PBB on a number of occasions, emailed me last night and called my attention to an article just published in Clinical Psychology Review by Anke Ehlers and a number of colleagues in the United Kingdom, Australia, and the United States (2010). In this study, the authors examined a recent meta-analysis by a number of prominent dodo birders (Benish, Imel, & Wampold, 2008) that claimed that all treatments for PTSD work equally well.
Ehlers and colleagues (2010) opened their article by pointing out that a number of meta-analyses have demonstrated that trauma-focused psychological treatments - that is, treatments that devote much of their focus to discussing, in detail, the nature of the traumatic event - produce superior results in the treatment of PTSD (e.g., Australian Centre for Posttraumatic Mental Health, 2007; Bisson & Andrew, 2009; Bisson et al., 2007; Bradley, Greene, Russ, Dutra, & Westen, 2005; Cloitre, 2009; Seidler & Wagner, 2006; van Etten & Taylor, 1998). Trauma-focused psychological treatments can involve various forms of cognitive behavioral therapy (e.g., prolonged exposure) as well as eye movement desensitization and reprocessing (EMDR; click here for access to articles we have written on the degree to which EMDR is actually any different from CBT). Further meta-analyses have shown that, within this group of treatments - those that focus on the trauma itself - results tend to be equal, meaning that no trauma-focused approach is better than any other (e.g., Bisson & Andrew, 2009). So, the point Ehlers and colleagues (2010) were making here is that there is a substantial history of research demonstrating that one particular class of treatments is particularly effective in treating PTSD, but that within this class, the treatments are equal to one another.
Despite this large evidence base, Benish and colleagues (2008) concluded on the basis of their own meta-analysis (as they have done in response to all of the meta-analyses they have run over the past couple of decades on various psychological treatments) that all "bona fide" PTSD treatments are equal. Benish and colleagues defined a particular treatment as "bona fide" if they determined that it was "intended to be therapeutic." This, of course, is not based upon any empirical standard or treatment technique, but rather on the opinion of Benish and colleagues.
The goal of Ehlers et al (2010) was to address the contradictory results of the Benish et al (2008) study relative to prior work on PTSD. In doing this, they made several points, each of which I will explain in detail. Essentially, they had two main critiques:
- The manner in which Benish and colleagues (2008) selected studies to include in their analysis was remarkably biased and completely skewed the results
- The analyses used by Benish and colleagues (2008) failed to demonstrate that certain "bona fide" treatments were more effective than natural recovery. In other words, several studies failed to show that the treatments themselves rather than the mere passage of time resulted in symptom remission
Let's take a look at what Ehlers and colleagues (2010) meant when they made each of these critiques:
Bias in selection
In any meta-analysis, the authors have to decide which studies on a topic to include and which to keep out. There is no clear-cut rule on how to do this and, depending on which studies you select, you are likely to get different results. Now, your instinct might be to say "just select the best ones," and that sounds great, except people interpret the meaning of the word "best" very differently. This, of course, is a complaint I have made about meta-analyses on PBB many times before, but it bears repeating every time it is relevant.
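To see concretely how study selection can drive a meta-analytic conclusion, here is a minimal, purely illustrative sketch. The effect sizes and sample sizes below are hypothetical numbers invented for illustration - they are not drawn from any of the trials discussed in this post - and the pooling is a simple sample-size-weighted average rather than any particular published method. The point is only that dropping the studies that show a difference makes the pooled difference vanish.

```python
# Purely illustrative: hypothetical between-group effect sizes (Cohen's d)
# comparing a trauma-focused treatment to another treatment, each paired
# with a sample size. None of these numbers come from real trials.

def pooled_effect(studies):
    """Sample-size-weighted mean of effect sizes (weight = n)."""
    total_n = sum(n for _, n in studies)
    return sum(d * n for d, n in studies) / total_n

all_studies = [
    (0.60, 80),   # hypothetical study: clear advantage for trauma-focused CBT
    (0.45, 60),   # hypothetical study: moderate advantage
    (0.05, 50),   # hypothetical study: roughly no difference
    (0.10, 40),   # hypothetical study: roughly no difference
]

# Pooling every study preserves a clear overall advantage.
print(round(pooled_effect(all_studies), 2))   # 0.35

# Excluding the studies that showed a difference (e.g., by declaring their
# comparison conditions "not bona fide") makes the advantage disappear.
null_only = all_studies[2:]
print(round(pooled_effect(null_only), 2))     # 0.07
```

The arithmetic is trivial, but that is exactly the point: the "result" of a meta-analysis is just a weighted average of whatever was let in the door, so the inclusion decisions can matter as much as the data.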
In a review published by Cloitre in 2009, 44 comparisons of different forms of PTSD treatment from 27 studies were included. Benish and colleagues (2008), on the other hand, claimed to have only found 26 comparisons in 22 studies and ultimately included only 17 comparisons from 15 studies in their analyses.
You might ask, "how did this happen?" Ehlers and colleagues (2010) pointed out that Benish and colleagues (2008) did not include most trials involving supportive (e.g., non-directive, Rogerian, person-centered) psychotherapy. Their rationale for this is that they did not believe supportive psychotherapy was "intended to be therapeutic" and, as such, was not bona fide. Given that this is the most commonly used form of treatment for PTSD in the British National Health Service (Ehlers, Gene-Cos, & Perrin, 2009) and that it is commonly used in the US as well - Pingitore, Scheffler, Haley, Sentell, & Schwalm (2001) found that 58% of practicing psychologists in California use this approach - it seems a bit strange to assume that individuals utilizing this form of treatment are not intending to perform a therapeutic function. Additionally, given that social support has been linked to PTSD recovery (Ozer, Best, Lipsey, & Weiss, 2003), it is difficult to argue that clinicians have no reason to believe that supportive psychotherapy has a chance to be helpful. Perhaps even more importantly, however, given that one of Bruce Wampold's strongest claims is that common factors - and particularly the therapeutic alliance - are the driving force of success in psychotherapy, it seems counter-intuitive for him and his colleagues to eliminate treatments that base their entire focus on developing a strong alliance between client and therapist. Taking this a step further, Benish and colleagues (2008) chose to eliminate all trials of supportive psychotherapy even if they outperformed a no-treatment control condition. In other words, whether or not a treatment impacts the client does not determine if it is "bona fide." All that matters is that Benish and colleagues (2008) feel that it is.
In making this decision, they took out a large number of treatments that could be compared to trauma-focused psychological interventions, thereby increasing the chances that they would be able to support their point that all treatments are equal (this becomes easier to do when you exclude any treatment that is not equal). Along these lines, Benish and colleagues (2008) included two studies of present centered therapy (PCT) but did not include two supportive psychotherapy trials in which the same techniques were used. It is not at all clear why they made this decision.
The two studies that Benish and colleagues (2008) included examined the use of PCT and trauma-focused CBT in clients with multiple, prolonged traumas (McDonagh et al., 2005; Schnurr et al., 2003). In one of these studies, CBT was used in group format despite the fact that no empirical evidence exists suggesting that this is a valid way to deliver the treatment. In each of these studies, Benish and colleagues (2008) concluded that the two treatments were equal despite the fact that, in one trial, CBT outperformed PCT in reducing avoidance, numbing, and possibly overall PTSD symptoms for individuals who received a full dose of treatment and, in the other trial, CBT outperformed PCT in the proportion of clients who, at 3-month follow-up, no longer met criteria for PTSD (82% for CBT vs 42% for PCT). Regardless, Ehlers and colleagues (2010) pointed out that even if we ignore these differences, we are left with two possibilities: PCT and trauma-focused CBT are equal to one another, or the difficult-to-treat client base obscured the results. Results of two recent trials help to clarify this. In Schnurr et al (2007), which Benish and colleagues (2008) chose not to include in their analyses, prolonged exposure outperformed PCT in the treatment of female veterans with PTSD. In Ehlers et al (in preparation), trauma-focused CBT again outperformed PCT. Given these recent findings, and the absence of any findings indicating that PCT ever outperforms CBT, you really have to start to wonder.
Do treatments benefit patients more than the simple passage of time?
The point made here by Ehlers and colleagues (2010) is that Benish and colleagues (2008) did not take nearly enough care to ensure that what they were measuring was actually doing what they claimed. Of all the treatment comparisons they looked at, only 6 involved treatments other than trauma-focused CBT and EMDR. On the basis of a single small study published in 1989, Benish and colleagues (2008) concluded that psychodynamic therapy and hypnotherapy are just as effective as trauma-focused psychological interventions for PTSD. The thing is, neither hypnotherapy nor psychodynamic therapy was consistently more effective than a no-treatment waitlist control condition in that study, and there were a number of important symptoms that did not improve at all in clients who received these treatments. In other words, receiving psychodynamic therapy or hypnotherapy was often no better than simply waiting around and not getting treatment. That's not impressive. This, of course, did not stop Benish and colleagues from concluding that all treatments for PTSD are equal to one another.
Benish and colleagues (2008) pointed to the fact that an early version of exposure therapy - a form of trauma-focused CBT - was used in that trial as well and did not perform very well (although it outperformed the other approaches). The problem with this point is that that particular form of exposure - trauma desensitization - is no longer widely used. Scientists recognized its deficiencies, adapted, and developed more effective forms of treatment that routinely result in greater treatment effects. As they tend to do in their analyses, however, these folks overlooked that fact, collapsed treatments together arbitrarily, and found null findings that they mistakenly believed represented equivalence across treatments. This, of course, is akin to comparing your first draft of a paper (and ignoring subsequent drafts) to a classmate's final draft, developed after several rounds of careful revisions, and then drawing conclusions about the degree to which each of you attends to detail. I suspect you would be unlikely to do that unless you were intentionally trying to support the notion that your work is inferior.
So what should we do?
I love the comments that Ehlers and colleagues (2010) used to open this section of the paper, so I will simply present them here (p. 273):
"Meta-analyses oversimplify matters. They require arbitrary decisions about categories and inclusions and exclusions, and these decisions influence the results."
Ehlers and colleagues (2010) see things in a manner very similar to mine on this point. They believe that dodo birders have made untenable decisions in choosing which studies to include and exclude in their meta-analyses, and as a result, they have painted a picture that has absolutely nothing to do with reality. Because few people ever actually look at the studies chosen and excluded by the authors of meta-analyses, the conclusions they make are simply taken at face value, regardless of whether or not they have any connection to the truth. This, of course, is true of meta-analyses that support empirically supported treatments as well, so the answer is not to simply discard the ones we dislike, but rather to focus our attention on the results of direct comparisons of treatments to one another in adequately powered samples, so that we cannot manipulate the data to simply tell the story we want to hear.
Ehlers and colleagues offered up several ideas for how to improve this situation. First, authors of meta-analyses need to be more transparent regarding which studies they included and excluded. By listing all members of each category, they make it easier for readers to see what's going on and to determine which studies simply went undiscovered by the authors. Second, they suggested that only trials that adequately measure the degree to which treatments are implemented according to their protocol be included. Rather than excluding studies on the basis of arbitrary decisions regarding whether or not they are "bona fide," a category made up by Wampold and colleagues, empirical data regarding the degree to which treatments were effectively implemented and outperformed a no-treatment control condition should be a determining factor regarding whether or not they are included in an analysis.
The basic point of the Ehlers et al (2010) study was to refute the notion that all treatments for PTSD are equal to one another. They did this by examining a meta-analysis by Benish and colleagues (2008) and demonstrating some important flaws in their approach. These flaws are very similar to the flaws from earlier Wampold work as well as the recent study by Shedler on psychodynamic therapy. Because issues like this keep coming up, the value of properly run meta-analyses versus the danger of poorly run meta-analyses is becoming an important issue to consider. The current practices are leading to the proliferation of flawed conclusions based upon sloppy data.
To wrap this article up, I want to demonstrate the degree to which certain meta-analyses stray from the truth by summarizing a fantastic table included in the Ehlers et al (2010) study. In this table, the authors demonstrated that Benish and colleagues (2008) have a habit of including small components of quotes from earlier studies that do not reflect the actual statement being made by the original authors.
Benish and colleagues (2008) quoted McDonagh and colleagues (2005) as saying "treatments did not differ significantly at any assessment time point on any measure" and "no significant differences." Sounds damning, huh? I mean, no treatments were different from one another at all. Let's see what the entire quotes said, though:
"Our hypothesis that CBT would be superior to PCT in promoting recovery received support from the finding that CBT was superior to PCT in achieving remission from the PTSD diagnosis at follow-up...the two active treatments did not differ significantly at any assessment time point on any other measure."
"CBT participants were significantly more likely than PCT participants to no longer meet criteria for a PTSD diagnosis at follow-up assessments."
Those two quotes from the actual study seem to be at odds with the partial quotes that Benish and colleagues (2008) used, huh? Indeed they are. This technique - quoting a person out of context or including only parts of quotes - is a common method of manipulating a situation to make it seem more supportive of our viewpoint than it actually is. It happens to great effect in politics, in middle school cafeterias, and in professional wrestling, but it has no place in science.
Sometimes the results of scientific studies do not turn out as we expected and we are forced to change the way we think about things. Other times, however, surprising results are due to poorly run studies. I can't stress this point enough: before you take anyone's word for something, ask how they came to their conclusions and look to see if anyone else has provided a counter-argument exposing their errors. In the case of the dodo birders, highly accomplished individuals are regularly publishing conclusions that simply are not supported by the facts and, unfortunately, their errors have the potential to keep people from seeking the right form of therapy for their particular condition.
If you would like to learn more about PTSD and its treatment, we recommend the following items, each of which is available through our online store of scientifically-based psychological resources:
- Prolonged Exposure Therapy for PTSD: Emotional Processing of Traumatic Experiences Therapist Guide by Edna Foa, Elizabeth Hembree, and Barbara Rothbaum
- Reclaiming Your Life from a Traumatic Experience: A Prolonged Exposure Treatment Program Workbook by Barbara Rothbaum, Edna Foa, and Elizabeth Hembree
- Prolonged Exposure Therapy for Adolescents with PTSD Emotional Processing of Traumatic Experiences, Therapist Guide by Edna Foa, Kelly Chrestman, and Eva Gilboa-Schechtman