Psychotherapy As Religion

The Civil Divine In America

William M. Epstein

About This Book

A provocative look at America on the couch. In Psychotherapy as Religion, William Epstein sets out to debunk claims that psychotherapy provides successful clinical treatment for a wide range of personal and social problems. He argues that the practice is not a science at all but rather the civil religion of America, reflecting the principles of radical self-invention and self-reliance deeply embedded in the psyche of the nation. Epstein begins by analyzing a number of clinical studies conducted over the past two decades that purport to establish the effectiveness of psychotherapeutic treatments. He finds that each study violates in some way the standard criteria of scientific credibility and that the field has completely failed to establish objective procedures and measurements to assess clinical outcomes. Epstein exposes psychotherapy's deep roots in the religious and intellectual movements of the early nineteenth century by demonstrating striking parallels between various types of therapy and such popular practices as Christian Science and spiritualism. Psychotherapy has taken root in our culture because it so effectively reflects our national faith in individual responsibility for social and personal problems. It thrives as the foundation of American social welfare policy, blaming deviance and misery on deficiencies of character rather than on the imperfections of society and ignoring the influence of unequal and deficient social conditions while requiring miscreants to undergo the moral reeducation that psychotherapy represents. This is a provocative, brilliantly argued look at America on the couch. Psychotherapy as Religion is essential reading for anyone interested in the history and current state of mental health.

1: Depression

More than 16% of adult Americans acknowledge that they have suffered from serious depression during their lifetimes and about 7% report serious depression within a single year (Kessler, Berglund, and Demler, 2003). Yet while major depression appears to be a disabling condition, there is little evidence that any psychotherapeutic treatment for it has been effective. Nonetheless, the enormous clinical literature claims routine success in treating adults, adolescents, and a variety of targeted groups.
The contemporary debate about treating major depression centers on whether the three basic forms of psychotherapy—cognitive-behavioral therapy, behavioral therapy, and psychodynamic interpersonal therapy—are superior to drug treatment and placebos. Yet the interventions are consistent in one very important regard: they all involve clinical interventions with the patient as the object of treatment. None implicate the social environment as the principal source of depression. Therefore, none seek a social remedy. The emergence of the clinical rather than the political arena as the cultural choice for handling the problem has profound political implications. Indeed, the continuing insistence of the clinical literature on its ability to handle depression, especially through short treatment lasting but a few months, diverts policy attention from the possibility of social and economic remedies for debilitating psychological reactions.
The best of the clinical literature, that is, its most scientific forms, employs randomized controlled trials, with the findings collected and summarized in a variety of meta-analyses. Some areas of treatment, discussed in subsequent chapters, have not been able to achieve even this degree of formal conformity with scientific practice. The studies of studies, together with their base of primary research, constitute the general assessments of the state of the art of clinical treatment of depression. The principal reviews (Casacalenda, Perry, and Looper, 2002; Hamilton and Dobson, 2002, updating Dobson, 1989; Gaffan, Tsaousis, and Kemp-Wheeler, 1995; Robinson, Berman, and Neimeyer, 1990; and a few others) have concluded on the evidence of the best of the experimental literature that psychotherapy is a powerful and predictable cure for depression. They suggest that managed care has exercised a benign influence in reducing the amount of treatment, since short-term interventions, rarely longer than twelve weeks, are as effective as longer-term treatments. As a result, treatment costs have declined, together with the disruption of patients' lives. Cognitive-behavioral treatment seems occasionally to outpace psychodynamic interpersonal therapy, more often behavioral interventions, and sometimes even medication. Still, the meta-analyses tout all four as superior to no treatment at all, that is, wait-list and placebo controls. Logically, less disabled patients do better in treatment than severely depressed patients. Even accounting for a number of practitioner and researcher loyalties to particular interventions ("researcher allegiance"), the reported benefits of psychotherapy for depression remain substantial.
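To see what such a summary actually does, consider a minimal sketch of fixed-effect, inverse-variance pooling, one common form of the arithmetic behind such reviews. The effect sizes and standard errors below are hypothetical placeholders for trial-level results; they are not drawn from any of the studies or reviews cited here.

```python
import math

# Hypothetical trial results: (standardized mean difference, standard error).
# None of these figures come from the studies discussed in this chapter.
trials = [
    (0.45, 0.20),
    (0.30, 0.15),
    (0.60, 0.25),
]

# Fixed-effect (inverse-variance) pooling: each trial is weighted by 1 / SE^2,
# so larger, more precise trials dominate the summary estimate.
weights = [1.0 / (se ** 2) for _, se in trials]
pooled = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval around the pooled effect size.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled d = {pooled:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

Nothing in this calculation corrects for attrition, unblinded raters, or researcher allegiance in the underlying trials; a biased trial-level estimate enters the weighted average at face value.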
Yet the meta-analyses are captives of their sources; their conclusions are only as accurate and credible as the research they cite. Unfortunately, the best of the research is too methodologically flawed to sustain any conclusion except indeterminacy and ineffectiveness, and always with the undercurrent of possible harm.
Despite reputations for scientific rigor, the apparently sophisticated authors, their hospitals, and their universities routinely ignore methodological pitfalls on the way to findings of effective treatment. The research itself constitutes a cultural manifesto more than scientific evidence for progress against a serious disability.
A critical analysis of the best of the clinical studies reduces their candidacy for scientific authority to social ideology that expresses deeply held cultural values rather than objective truths about repairing human dysfunction and unhappiness. The subculture of professionals that purveys psychotherapy for depression conducts factional negotiations over belief, recruiting science to serve occupational interests. Through faulty research that contrives convenient support for psychotherapy, the clinician and the researcher reveal a blind faith and partisan loyalty that overwhelm a prudent skepticism and the other tenets of science. Psychotherapy for depression is best understood as testimony for social belief rather than as clinical science.
Adults
Psychotherapy's effectiveness in treating depression is not sustained by scrutiny of the most credible research in the field's literature: serious methodological flaws undermine the authority of each and every study; researchers exaggerate their findings and ignore the weaknesses of their research; indeterminacy, ineffectiveness, and perhaps even harm are the only credible effects of psychotherapy for depression. Indeed, in comparison with Casacalenda et al.'s (2002) problematic review, every other meta-analysis and summary builds more Panglossian findings on even less credible evidence.
Rather than to psychotherapeutic interventions, the improvement that is measured in patients can more plausibly be attributed to the seasonality of depression; the demand characteristics of the research situation itself, frequently transmitted through "the therapeutic alliance" between patient and therapist; the researcher's allegiance to positive outcomes; measurement distortion; inaccurate patient self-reports; differential attrition; and other factors. These pitfalls of research blemish the experimental literature of psychotherapy like meteorite craters on Mars and the Moon. They seem to be characteristics of the field's research, essential to sustain its social role.
Casacalenda et al. (2002), one of the most selective reviews of psychotherapy’s effects on depression, identified only six experiments that employed “randomized controlled double blind trials for well-defined major depressive disorder in which medications, psychotherapy, and control conditions were directly compared and for which remission percentages were reported” (p. 1354). They concluded that psychotherapy was as effective as medication for nonpsychotic patients and that both were about twice as effective as control conditions: 46% remission of symptoms for the two interventions versus 24% for control patients. However, each of the six studies is seriously marred as a scientific statement. Each is a narrative for our times that seduces broad belief, a marketing device for psychotherapists, a dramatic testimonial that observes the rituals of science while negating its defining rigor and skepticism.
The earliest of the six, Herceg-Baron et al. (1979), suffered an attrition rate of 56% within the seventeen weeks of their experiment and concluded with suggestions for maintaining patients throughout the study. Reported elsewhere (DiMascio et al., 1979; Weissman et al., 1979), the clinical outcomes of their four experimental conditions—interpersonal psychotherapy alone, medication alone, a combination of the two, and a “nonscheduled treatment control” (supportive therapy on demand), each provided for the short term of sixteen weeks—are vitiated by the enormous attrition as well as by other problems of measurement. Indeed, differential attrition in all groups by itself may explain any outcome.
The authors found that all of the treatments were effective, that is, significantly superior to the control, but that the combination of short-term psychotherapy and medication was superior to either one alone, usually taking effect after only eight weeks of treatment. Clinical outcomes were measured by the Hamilton Rating Scale for Depression and the Raskin Depression Scale, both of which are highly imperfect assessment tools. It is notable that improvements measured by the Hamilton scale were dramatic for the combined group and large for medication and psychotherapy alone but that the mean improvements were clinically marginal except for the combined treatment. The Raskin improvements were small except, again, for the combined treatment.
However, attrition was very large and differential, with more than 50% of patients failing to complete sixteen weeks of all but the combined treatment, in which only 67% completed sixteen weeks. Moreover, the resulting sample of treated patients was also small, ranging from 17 in the psychotherapy group to only 23 in the medication group.
All of the patients were seen for clinical assessment after one, four, eight, twelve, and sixteen weeks, or at the termination of treatment, by a clinical evaluator (a psychiatrist or a psychologist) who was independent of and blind to the patient’s treatment. Patients were instructed not to discuss with the evaluator the type of treatment they were receiving. The treating psychiatrist also evaluated the patient. . . . [A]greement between clinical evaluator and psychiatrist . . . was excellent. (Weissman et al., 1979, p. 556)
This, however, does not mean that the raters were either blind to the group assignments of the patients or independent. To the contrary, despite instructions to hide their assignments, the patients probably offered clues. After all, the five assessment points provided many patients with a variety of opportunities for disclosure. Further, the agreement between raters employed by the same department in the same research institution, in this case Yale University's medical school, probably reflects the symmetry of their motives, preferences, and commitments more than any capacity to assess neutrally and objectively the outcomes of their chosen craft. The obligation is on the researcher's shoulders to deflect potential criticisms of bias by employing credible methods, but the authors did not make the effort to enlist truly independent raters. To the contrary, their decision to rely on an obviously beholden group of evaluators raises doubts about the independence of the research itself.
In the end, it is provocative that such imperfect research should be cozened by such prestigious and culturally central organizations: Yale University hosted the experiment, the federal government funded it through the National Institute of Mental Health and the Alcohol, Drug Abuse, and Mental Health Administration, and the findings were published in the American Journal of Psychiatry and the Archives of General Psychiatry, two of the world’s most influential and respected periodicals, presumably because of their commitment to the canons of scientific research. The institutional acceptance of faux science begins to suggest that social meaning rather than objective authority was the point of the butchered experiment, a triumph of autobiography over clinical science and imagination over history.
The second of the six, the National Institute of Mental Health's Treatment of Depression Collaborative Research Program (TDCRP), is reported principally in Elkin et al. (1989) but also in additional papers (Imber et al., 1990; Sotsky et al., 1991). It is extensively cited throughout the literature as evidence for the efficacy of psychotherapy. TDCRP assigned a total of 250 patients at three sites to four conditions: cognitive-behavioral therapy, interpersonal therapy, medication, and a drug placebo plus minimal support. Therapy lasted for sixteen weeks.
The researchers concluded that "all treatment conditions . . . evidenced significant change from pretreatment to posttreatment. . . . The results for the two psychotherapies fell between those for [medication] and [drug placebo plus minimal support]" (p. 980). Yet drug placebo with minimal support therapy is commonly and appropriately employed as a control for true treatments, as it was in Weissman et al. (1979). Thus the therapies rarely improved on the effectiveness of the placebo. In a few regards, the placebo group did better than the treated groups; outcomes were measured as personal changes, sleep disturbances, appetite changes, and so forth. Moreover, a host of methodological problems, notably including large and probably differential attrition of 38% as well as practice effects in measurement, further undermined the research.
Rather than explaining the comparability of outcomes among the groups as mysteriously resulting from undefined "core processes" of therapeutic value, the researchers might have more modestly and honestly accepted their drug placebo minimal-support condition as a true placebo control, with the consequence that all of their interventions were largely ineffective. This seems reasonable since sixteen weeks, and frequently less, of uncertain therapy is an improbable strategy for resolving serious mental and emotional conditions. Even though a subsequent replication failed to confirm the TDCRP findings (McLean and Taylor, 1992), Elkin et al.'s (1989) hopefulness and disingenuous research continue to be offered routinely in support of psychotherapy for depression.
A. I. Scott and Freeman (1992), the third of the six, compared four conditions: medication, cognitive-behavioral therapy, social-work counseling and casework, and routine care provided by a general practitioner. The authors concluded that at the end of sixteen weeks of care "the severity of depressive symptoms declined markedly in all treatment groups, and any differences in clinical efficacy between [general practitioner care and the other treatments] were not commensurate with the differences in the length and cost of treatment" (p. 887). At the end of treatment, only the social-work intervention provided significantly better outcomes than general-practitioner care, although the patients in the social-work group were substantially older and less ill than patients in the other group. To its credit, the study counted attrition as treatment failure. However, outcome assessment was not blind; follow-up data were not collected; and the procedures of both the cognitive-behavioral therapy and the social-work treatment were not manualized, with the result that the actual content of treatment in these two groups remains uncertain.
Most important, however, there was no true nontreatment control. The study conveniently relied upon the conclusions of the literature that all of its chosen treatments were effective. This is a problematic assumption (discussed below), emerging from similarly deficient studies that usually compared, but imperfectly, wait-list nontreatment conditions to therapeutic interventions. True placebo controls enormously reduce the reported efficacy of psychotherapy. Furthermore, general-practitioner care provided little psychotherapeutic content even considering the substantial number of referrals that were made to specialized mental-health services. Indeed, general-practitioner care may be a placebo for psychotherapy, in which case the similarity of outcomes, if not an artifact of repeated measurement itself, is a poor testimonial to the efficacy of psychotherapy. It is plausible, in fact likely, that a substantial number of depressed patients would have recovered with no treatment at all; spontaneous remission—self-cure—is a critical tenet of scientific skepticism, especially in clinical research. Thus A. I. Scott and Freeman's (1992) experiment is invalidated by the absence of a true nontreatment control; the likelihood of natural recovery; the important differences between groups in age, sex, and the severity of depression; the lack of blinding; and the inappropriate assumption that all of their conditions were effective. Moreover, the absence of follow-up also prevents an assessment of the duration of effects, including the degree to which some patients may have deteriorated as a result of treatment. A. I. Scott and Freeman (1992) stands as evidence that compromised research leads to agreeable findings that falsely become the credentialized tenets of later designs. This is the process of legend and myth, screened through the lore of science.
Mynors-Wallis, Gath, Lloyd-Thomas, and Tomlinson (1995), the fourth study, did employ a placebo control for medication and problem-solving treatment (perhaps analogous to cognitive-behavioral therapy) for depressed patients in primary care. They found that only 3 1/2 hours of problem solving provided a significant improvement over the placebo control: 60% of patients given problem solving had recovered, compared with only 27% of placebo patients. It is intriguing that the placebo condition seems to be the same one employed by TDCRP—drug placebo plus standard clinical management, that is, "supportive therapy"—but the TDCRP results were very different, with comparable success for all of the treatment conditions. It seems plausible that the different research assumptions about drug placebo plus standard clinical management were transmitted as subtle demand characteristics, creating the different findings. TDCRP considered the condition a treatment and created measurement incentives to report efficacy; in contrast, Mynors-Wallis et al. (1995) and Weissman et al. (1979) considered the condition to be a placebo and thus created the incentives to minimize its success. Yet Mynors-Wallis et al. (1995) did not conduct a follow-up assessment, which would seem to be necessary to estimate the value of the recovery past the treatment situation itself. Indeed, reports of recovery may diminish as patients are freed from any obligation or gratitude to the therapist or clinic.
Mynors-Wallis et al. (1995) also claimed that the assessments of outcomes "were made by one of two experienced research interviewers who were blind to the type of treatment" (p. 441). However, this seems unlikely, as A. I. Scott and Freeman (1992) recognized, especially since raters may conduct as many as three interviews with the same patients, who probably provide clues to their assignment. But most important, the instruments themselves, even when applied by neutral judges, are inadequate and unreliable (as discussed in chapter 6). They often rely upon patient self-report, which is greatly affected by a variety of research conditions that influence the patient's response. It is a standing and near-universal indictment of psychotherapeutic research that it fails to enlist evaluators who are independent of the research situation or of the occupational protectiveness of therapists.
Furthermore, 26 of 91 patients failed to complete six sessions of treatment over the twelve weeks of treatment; this 29% attrition rate is increased by an additional six patients who apparently refused to be interviewed ("data missing"). Thus the effective attrition rate climbs to 34%, far higher than the 20% that the authors initially suggested was tolerable. They apparently did not include the lost patients as treatment failures, which probably would have wiped out their reported gains. Yet their results are reported for patients who completed a minimum of only four sessions of treatment. Such a large number of superficially treated patients reinforces the possibility that the study's outcomes are artifacts of distorted measurement procedures rather than true effects of the experimental interventions.
The fifth experiment endorses specialized care for current major depression. Schulberg et al. (1996) claimed that patients receiving medication or interpersonal psychotherapy were better off eight months after the beginning of treatment than primary-care patients. Yet again, attrition rates were stunning: only about 50% of the medication group and the psychotherapy group completed acute treatment, while fewer than 30% of the medication group and only 42% of the psychotherapy group completed continuation care. While some patients left treatment by themselves, others were "dropped when judged by their clinicians to be nonresponders" (p. 916). Only 10 of 92 patients dropped out of primary care. If noncompleters are considered failures (and their initial Hamilton rating scores are carried through the analysis), then the positive findings evaporate. As it is, the differences in outcomes, measured as the severity of depression among the groups, while statistically significant, are not large. Differences between psychotherapy and usual physician care are actually tiny. Moreover, even these very credulous authors raise a question about initial selection bias, since many acutely depressed patients refused to participate in their experiment.
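The reanalysis implied here is easy to illustrate. The sketch below uses invented Hamilton-style scores, not Schulberg et al.'s data, to show how the apparent benefit shrinks when noncompleters are retained in the analysis with their baseline scores carried forward, that is, counted as failures, rather than simply dropped.

```python
# Hypothetical Hamilton-style depression scores; not data from Schulberg et al. (1996).
# Each patient: (baseline score, final score, or None if the patient dropped out).
patients = [
    (24, 10), (26, 12), (22, 9), (25, 11), (23, 10),              # completers who improved
    (27, None), (25, None), (24, None), (26, None), (28, None),   # dropouts
]

# Completers-only analysis: dropouts are simply excluded.
completer_gains = [b - f for b, f in patients if f is not None]
completer_mean = sum(completer_gains) / len(completer_gains)

# Intention-to-treat style analysis: dropouts keep their baseline score,
# i.e., they contribute zero improvement to the average.
itt_gains = [b - (f if f is not None else b) for b, f in patients]
itt_mean = sum(itt_gains) / len(itt_gains)

print(f"mean improvement, completers only: {completer_mean:.1f} points")
print(f"mean improvement, dropouts carried as failures: {itt_mean:.1f} points")
```

In this invented example, half the sample drops out and the mean improvement falls by half; at the attrition rates reported for the acute and continuation phases of the actual study, the same arithmetic could plausibly erase the reported advantage over primary care.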
Thus Schulberg et al. (1996) proved nothing at all except that supposedly sophisticated funding sources, including the National Institute of Mental Health and a roster of the most prominent American research foundations, were more anxious to endorse a cultural form than to practice science. Funding sources reviewing Schulberg et al.'s initial research protocols should have voiced concern about sampling, measurement, and analysis, raising cautions that the research was premature and badly designed. This did not occur, introducing the possibility that motives of social compatibility rather than treatment success justified the research. Moreover, by publishing the findings in their present form, the Archives of General Psychiatry becomes complicit in propagating this fiction of clinical efficacy.
The last of Casacalenda et al.'s (2002) six studies purports to prove the superiority of both cognitive-behavioral therapy and medication to a placebo control (clinical management and placebo medication) as treatments for major depression with atypical features. Jarrett, Schaffer, McIntire, Witt-Browder, Kraft et al. (1999) randomized 108 patients to the three groups for ten weeks of treatment. Twice as many patients in the treated groups as in the placebo group benefited, although a more stringent definition of success wiped out the differences. Jarrett et al.'s control was the same as TDCRP's control, but here placebo medication and clinical management produced little benefit. Attrition was low in the treated groups, but 23 of 36 patients dropped out of the placebo group; presumably many sought alternative care; perhaps only the most persistent and the most debilitated stayed. No follow-up was reported, and again ratings were claimed to be blind. The authors raise questions about the representativeness of the treated sample, since many prospective participants refused to participate, resulting in a typical patient who "was a white female ap...
