by Michael D. Anestis, Ph.D.
Over the past few years, I have weighed in on a number of discussions regarding the evidence-base for long-term psychodynamic psychotherapy (LTPP) as a treatment for various forms of mental illness. My posts on these topics have generally been in response to highly publicized publications reporting remarkably strong effects for LTPP across diagnostic categories - effects the authors typically report are stronger than those of short-term treatments in general and empirically-supported treatments (ESTs) in particular. Importantly, I am not the only one who has expressed substantial concerns with these publications. In fact, among those who use empirical evidence as the determining factor in whether or not a study makes a valued contribution to science, a number of studies on this topic have been largely discredited (more on this later). That being said, it is extremely important for these debates to take place across a number of forums so that more people - whether they are psychologists, aspiring grad students, or consumers of mental health care - can make informed decisions about their clearest path toward a desired outcome.
Recently, PBB guest author Jim Coyne wrote a post on his Psychology Today blog critiquing a new study by Falk Leichsenring and Sven Rabung (2011), recently published in the British Journal of Psychiatry, which claimed to demonstrate superior effects for LTPP relative to short-term psychotherapy. I would encourage you to read Dr. Coyne's piece for a thorough description of the shortcomings of this particular study. That being said, the general crux of Dr. Coyne's concerns is that the data utilized by Leichsenring and Rabung are not capable of adequately answering the relevant questions and that the data they utilize do not actually depict the narrative described in their conclusions and subsequent publicity.
Having read Dr. Coyne's piece, Dr. Jared DeFife felt compelled to write a response on his own Psychology Today blog expressing his disagreement with Dr. Coyne. My goal today is to respond to Dr. DeFife's post, expressing my disagreement on a number of points. As always, I want to be clear in pointing out that my goal is not to attack the writer as a person or to claim that psychodynamic psychotherapy is a failure, but rather to discuss whether or not the logic and empirical evidence being used to buttress an argument seem reasonable and consistent with stated conclusions. As you might guess at this point, I do not think this is the case. Before reading my critique, however, I would highly encourage you to read Dr. DeFife's blog for yourself and draw your own conclusions so that you can read my words with a critical and informed eye. That is, after all, the goal of science: competing hypotheses and intelligent conversations leading to clearer understandings (which, in time, will likely be overturned or expanded upon by further knowledge).
Early in Dr. DeFife's post, he wrote a section entitled "psychotherapy isn't a lab experiment." Before discussing the three points he included under this title, it is worth noting an important retort: no medical procedures or treatments are lab experiments. Like psychotherapy, they are interventions aimed at improving the lives of real people, many of whom are struggling with life-threatening conditions. That being said, given that the stakes are so high, understanding the degree to which those interventions actually produce results that justify their cost and continued use in a marketplace of competing options seems vital. Like medical procedures, things can happen during the course of psychotherapy that are difficult to capture in an experiment; however, as with medical procedures, the well-being of patients is best safeguarded through careful analyses of the degree to which psychotherapy matches the expectations of those who espouse a particular approach. We don't question the need or ability to study medical techniques, and psychotherapy does not enjoy a special status that earns it different treatment in this regard.
Under this title, Dr. DeFife included three main points: (1) psychotherapy is not a standardized system implemented in a rigid manner, thereby making it more difficult to study; (2) true clinical trials are "double-blind" and psychotherapy isn't, thereby diminishing the information we can gain from empirical trials; and (3) therapy takes significant time, and studying something so complex for that long is difficult and expensive. Each of these points seems to me to involve important problems.
With respect to the first point, there is, in fact, significant evidence that the use of treatment manuals (which, although serving as guidelines, do not require a therapist to administer treatment in a robotic manner lacking in empathy) improves outcomes and that the degree to which treatment is administered in a manner consistent with instructions is an important influence on the degree to which the treatment will prove effective for that client. Paul Meehl would tell us that there is nothing wrong with improvising in the absence of evidence and that, as such, when events take place that are difficult to predict (whether in psychotherapy, surgery, or any other medical procedure), it is the responsibility of the clinician to deviate from the anticipated course, but then to follow up any such deviation with empirical investigation. In other words, yes, treatment is not always delivered in the exact same manner across clients, but that can be controlled for in any experiment and thus serve as a variable that is informative in terms of understanding results.
With respect to the second point, I'm not entirely sure I understand Dr. DeFife's idea. I agree that psychotherapy is not a double-blind procedure, as at least the clinician knows what treatment is being delivered; however, he notes at the end of that paragraph that just because we don't have double-blind data on seat belt efficacy does not mean we do not have meaningful data. I would agree with that point - imperfect psychotherapy data are highly valuable. My concern is that this point might be used to justify faith in all flawed data rather than to consider each result within the context of its limitations and offer more confidence to results based upon data of greater quality. In other words, if the evidence-base for one treatment is more flawed than the evidence-base for another treatment, saying all data are flawed and therefore equal seems to be a gross misrepresentation of reality. If this is not Dr. DeFife's point, however, then obviously this is a moot point.
With respect to the third point - that studying psychotherapy is difficult and expensive due to time requirements - I agree, but my response is: so what? These are important questions that require careful answers. If somebody wants their treatment approach to be considered legitimate as a response to life-threatening conditions, they need to provide data capable of justifying that outcome. Psychodynamic therapy has had just as much time as other treatments (in fact, much more) to develop that evidence-base. No treatment (including CBT for diagnoses for which it does not have strong evidence) should be used in the absence of evidence, even if that evidence is hard to attain.
The next section of Dr. DeFife's blog post was entitled "a growing evidence base for psychodynamic therapy." This portion was based mostly upon three citations: Leichsenring and Rabung (2008), Shedler (2010), and Gerber et al. (2011). I have not read the third piece yet, so I will hold off on making any comments (and get to work on reading the piece). The first two citations, however, fall into the "widely discredited" category I mentioned earlier. Rather than rehash the debates noted here and elsewhere countless other times, I'll simply direct you to these prior PBB articles detailing the many flaws of those studies:
- Initial response to Shedler (2010)
- Response to media coverage of Shedler (2010)
- Response to Shedler (2011)
- Coverage of published critique of Leichsenring and Rabung (2008)
Interestingly, if you read the comments section in the second link, you'll see Dr. DeFife commented using many of the same arguments (and actually the exact same Woody Allen quote used later in his PT blog). Ultimately, if you read these links, what you'll see is that the data simply are not consistent with the claims made by the publishing authors. That does not mean that the opposite of what the authors say is true, but it means that it is inappropriate to look at those numbers and then publicize the notion that psychodynamic psychotherapy is outperforming ESTs, that longer-term treatment is needed in order to properly impact mental illness, or any other similar idea.
The next section of Dr. DeFife's piece is entitled "critiques of psychodynamic therapy research: fast and furious." Early in this section, Dr. DeFife writes:
"In critiquing or looking at critiques of meta-analyses, I'm always aware of the great opening monologue of Woody Allen's classic film Annie Hall: 'There's an old joke...two elderly women are at a Catskill mountain resort, and one of 'em says, "Boy, the food at this place is really terrible." The other one says, "Yeah, I know; and such small portions."' The critiques of meta-analyses and systematic reviews generally follow the exact same logic: Boy, the studies they review are really terrible. Yeah, I know, and they didn't include enough of them!"
Quite frankly, I think this misses the point entirely. Certainly some people critique the studies included in meta-analyses based purely upon the number included or the sample size utilized within the studies. The larger complaint, however, is that the studies are of an atrocious overall quality and do not even come close to directly addressing the important questions. Taking it even a step further, as detailed in the PBB links above, when the studies actually make an effort to compare psychodynamic therapy to ESTs for particular conditions, the results support either the EST or equivalence. When looking at secondary measures - measures included in a study that were not relevant to the central hypotheses and often are not measures of the severity of the condition being treated - results sometimes paint a different picture, but equating those measures with primary measures (based on a priori hypotheses) is questionable at best.
The next section of Dr. DeFife's post was entitled "what the studies really say." I will once again refer to the links above for a detailed discussion of the data utilized in these studies. The first and third links are particularly detailed on this point (the second is the least detailed in this regard). Suffice it to say that many view "what the studies really say" quite differently.
The final section of Dr. DeFife's post - "finding more worthwhile questions to investigate" - is the one with which I actually have the firmest disagreement. Here again, the author expressed a very similar point in his earlier PBB comment: that we're better off investigating questions other than which treatments work better for particular conditions than others. First off, plenty of researchers are investigating such questions. The two are not mutually exclusive. Secondly, given the continued preponderance of non-evidence-based treatments in mental health and the willingness of certain researchers and media outlets to publicize false claims and/or broad claims based upon faulty data, I would argue that the need is as strong as ever to enforce a strict policy of continuing to test the efficacy and effectiveness of particular treatments for particular conditions relative to alternative options. In fact, I think the funding should go to proposed studies in which individuals who are experts in particular treatments administer those treatments (e.g., one study with experts in psychodynamic and cognitive behavioral treatments for depression) to samples randomly assigned to receive one treatment versus the other, and in which hypotheses are made ahead of time regarding specific variables on which one treatment is expected to outperform the other. Most consumers have no way of wading through a market of competing treatments and knowing where those treatments stand relative to one another, so the best answer may simply be to shape what comes to market in the first place by requiring treatments to perform to a particular level before being offered to clients presenting with particular needs. Knowing that the treatment being received by a person in need is, in fact, the one that tends to produce the best outcomes for particular groups of people on particular measures seems to me to be as important a question as we can ask in this field.
Anyway, just to reiterate one of my main points here: I disagree with Dr. DeFife's conclusions, but this has nothing to do with any sort of sense of who he is as a person. I have never met him and would not hesitate to shake his hand and have a friendly and vigorous scientific debate. The point here is simply that we disagree on the nature of and information provided by the data, and we both believe it is important to make our case in a manner in which individuals can read multiple viewpoints and draw informed conclusions.
Dr. Anestis is a post-doctoral fellow with the Military Suicide Research Consortium.