by Scott O. Lilienfeld, Ph.D.
Is there a gap between the research evidence concerning the most effective psychological techniques and the implementation of these techniques in routine clinical practice? As is so often the case, the answer depends on whether we view the glass as half full or half empty. Nevertheless, any reasonable evaluation of the survey evidence gives serious cause for concern. This issue is of far more than academic importance. If large proportions of practitioners are relying on treatment and assessment methods that have been found wanting or have been insufficiently researched, they are placing thousands, perhaps tens of thousands, of mental health consumers at risk.

The magnitude of the research-practice gap, presuming it exists, has assumed new urgency with the recent publication of an influential, widely publicized, and controversial monograph by Timothy Baker, Richard McFall, and Varda Shoham (2009), published in Psychological Science in the Public Interest (click here for PBB's coverage of this article). The core message of the Baker et al. report is that the state of psychology education, training, and practice is akin to the pre-scientific state of medical education prior to the 1910 Flexner report, which radically revamped training in medical schools.
What do survey data on the use of scientifically supported interventions among psychologists and other mental health professionals indicate? Such surveys, we must acknowledge, have their methodological shortcomings. They are based on potentially unrepresentative samples (clinicians who are willing to complete them) and self-reported information. If anything, both limitations seem likely to lead to underestimates of the prevalence of scientifically unsupported techniques: Therapists who practice unscientifically may be reluctant to return surveys inquiring about their use of clinical techniques and may underreport their use of unscientific methods. Bearing these caveats in mind, selected surveys of practitioners demonstrate that:
- Most therapists who treat clients with eating disorders do not use scientifically supported treatments for these conditions (i.e., cognitive-behavioral, behavioral, interpersonal therapies; Mussell, Crosby, Crowe et al., 2001).
- Half or more of doctoral-level licensed therapists who treat obsessive-compulsive disorder (OCD) do not use the clear-cut scientific treatment of choice for this condition, namely exposure and response (or “ritual”) prevention (Freiheit, Swan, Vye, & Cady, 2004). Instead, increasing numbers are using energy therapies and eye movement desensitization and reprocessing, neither of which has been scientifically supported for OCD (Freiheit et al., 2004). Similarly, only half of licensed and provisionally licensed psychotherapists who treat patients with posttraumatic stress disorder use imaginal exposure, which is the most scientifically established treatment for this condition (Becker, Zayfert, & Anderson, 2004). Moreover, fewer than 30% of these psychologists have even been trained in exposure techniques (Becker et al., 2004).
- One third of children with autism and autism-spectrum disorders receive scientifically unsupported interventions, like sensory-motor integration therapy and facilitated communication (Levy & Hyman, 2003), the latter of which appears to be mounting a major comeback in the media, if not in actual clinical practice (Wick & Smith, 2009).
- Approximately 25% of U.S. doctoral-level licensed psychotherapists routinely use two or more suggestive techniques, like repeated prompting and cuing, journaling, guided imagery, body work, and symptom interpretation, to recover purported memories of early abuse [Poole, Lindsay, Memon, & Bull, 1995; see Polusny & Follette, 1996, for comparable results among members of American Psychological Association (APA) Divisions 12 and 16].
- Thirty-nine percent of APA members who conduct clinical assessments “always or frequently” use human figure drawings in their practice despite consistent evidence that such drawings are invalid for the overwhelming majority of diagnostic purposes; 43% “always or frequently” use the Rorschach Inkblot Test despite consistent evidence that only a handful of the clinical inferences derived from this measure are empirically supported (Watkins, Campbell, Nieberding, & Hallmark, 1995).
- Among Division 12 members, 98% use clinical prediction in contrast to only 31% who use actuarial (statistical or mechanical) prediction methods, despite the increasing availability of the latter methods and overwhelming research evidence that actuarial prediction is almost always equal or superior to – and far less expensive than – clinical prediction. Moreover, when asked why they do not use actuarial prediction methods, many practitioners provide factually unsupportable or conceptually confused responses, claiming that such methods are less accurate (32%), more expensive (27%), or even less efficient (23%) than clinical methods (Vrieze & Grove, 2009).
Regrettably, some of these data derive from the mid-1990s, and more recent systematic data are unavailable. In particular, updated information on clinicians’ use of recovered memory and projective techniques is sorely needed. Nevertheless, the data presented here are clearly worrisome, and suggest that although many practitioners – including APA members – are using scientifically supported methods (the good news), hefty pluralities of them are not (the bad news). These and other survey data, some of which Baker et al. (2009) cited, raise troubling questions concerning the scientific basis of our discipline. They strongly suggest that the research-practice gap is genuine, and that many Americans are receiving suboptimal mental health care.
So how has the field, in particular the APA, reacted to the Baker et al. (2009) monograph and to survey data demonstrating the existence of a marked research-practice gap among its membership? Disappointingly, the APA’s response has overwhelmingly been to circle the wagons. Steven Breckler (2010), Executive Director of APA’s Science Directorate, wrote that “Clinicians do rely on the science” and suggested that “the claim by Baker et al. that clinicians eschew science in favor of clinical experience is not very well founded” (note, however, that Breckler’s statement oversimplifies Baker et al.’s position, as they did not apply this description to all clinicians). In response to a Newsweek article discussing the Baker et al. monograph, Katherine Nordal (2009), APA’s Director of Professional Practice, wrote that the claim that many or most clinicians do not learn or practice effective treatments is “certainly unclear and not backed with (sic) good evidence.” She continued: “The American Psychological Association has a code of ethics for its members that dictates psychologists must base their clinical judgments on scientific and professional knowledge. Licensed psychologists practice within their areas of expertise as required by state regulations and our ethical principles.” Nordal’s assertions are remarkable for what they do not tell readers: namely, that this feature of the APA ethics code is virtually never enforced, and that APA rarely, if ever, sanctions practitioners who practice unscientifically.

And, in a letter to the New York Times taking issue with research evidence that most patients with major depression receive inadequate care, James Brush, former president of the Ohio Psychological Association, voiced a different view. According to Brush (2010), the debate concerning evidence-based therapies needlessly pits researchers against practitioners given that most of the latter are already relying on good science to inform their treatments.
Moreover, Brush wrote, “It is also untrue that psychologists do not learn about evidenced-based treatments. Nearly all psychologists in all states must take continuing education and are thereby exposed to the current research about psychological conditions and treatments.” Yet Brush did not inform readers that there is no requirement that continuing education (CE) courses be grounded in adequate science. Nor did he mention that CE courses offered by APA-approved sponsors have included training in crisis debriefing, rebirthing, imago relationship therapy, Jungian sandplay therapy, and a plethora of other therapeutic methods devoid of scientific support. Crisis debriefing, for example, has been found to be either worthless or harmful in treating trauma-exposed victims.
Inexplicably, Breckler, Nordal, and Brush all neglected to discuss, let alone address, the survey data reviewed here. As a consequence, they failed to rebut the central evidence on which concerns regarding the research-practice gap are based. Breckler, for example, criticized Baker et al.’s interpretation of a vignette study asking clinicians to make hypothetical judgments about evidence, but ignored survey data regarding clinicians’ actual practices.

APA’s defensive attitude toward the data presented in the Baker et al. report is ultimately counterproductive. APA would instead be wise to follow the lead of companies that have implemented quality improvement (QI) approaches premised on the philosophy that acknowledging errors, and rooting out these errors, is the best route to effective practice. One of Honda’s credos, for example, is “Our customers are satisfied because we never are.” Or witness Toyota’s recent response to the problem afflicting its accelerator pedals: It swiftly acknowledged the problem, apologized unequivocally to its customers, and instituted an around-the-clock policy to fix it. Or Domino’s Pizza’s recent admission that its pizza crust tasted “like cardboard” and needed to be replaced by a superior recipe.
Rather than cavalierly dismissing criticisms of the scientific quality of clinical psychology, APA could benefit from a humbler approach that embraces the view that “We agree that we can and should do better, and we’re going to try.” Yet the first step toward QI is an open admission of the need for improvement, an admission that APA seems reluctant to make – probably because it lacks the courage to ruffle the feathers of a sizeable proportion of its membership. APA’s priority should rest not with protecting the feelings of its members, but with the mental health of the general public.
If you would like to learn more about this or other topics discussed on the site, PBB recommends that you consult its online store for scientifically based psychological resources.