The Great Ideas of Clinical Science

17 Principles that Every Mental Health Professional Should Understand

  1. 448 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

The idea that there is a fundamental rift between researchers and practitioners should not come as a surprise to anyone familiar with the current literature, trends, and general feelings in the field of clinical psychology. Central to this scientist-practitioner gap is an underlying disagreement over the nature of knowledge: while some individuals point to research studies as the foundation of truth, others argue that clinical experience offers a more adequate understanding of the causes, assessment, and treatment of mental illness.

The Great Ideas of Clinical Science is an ambitious attempt to dig beneath these fundamental differences, and reintroduce the reader to unifying principles often overlooked by students and professionals alike. The editors have identified 17 such universals, and have pulled together a group of the most prolific minds in the field to present the philosophical, methodological, and conceptual ideas that define the state of the field. Each chapter focuses on practical as well as conceptual points, offering valuable insight to practicing clinicians, researchers, and teachers of any level of experience. Written for student, practitioner, researcher, and educated layperson, this integrative volume aims to facilitate communication among all mental health professionals and to narrow the scientist-practitioner gap.


Information

Editors: Scott O. Lilienfeld, William T. O'Donohue
Publisher: Routledge
Year: 2012
eBook ISBN: 9781135930172
Edition: 1

PART I

How to Think Clearly About Clinical Science

CHAPTER 1

Science Is an Essential Safeguard Against Human Error

WILLIAM T. O'DONOHUE, SCOTT O. LILIENFELD, AND KATHERINE A. FOWLER

As behavioral health professionals, we justify our professional titles—that of clinical psychologist or any of its cognates (e.g., counseling psychologist, psychiatrist, social worker, psychiatric nurse, marriage and family counselor, psychotherapist)—by our specialized knowledge. Simply put, we ought not merely believe something to be true but should actually know it to be true on the basis of good evidence. Our clients hire us largely out of their beliefs that we possess specialized knowledge and skills. Our knowledge of the evidence relating to human behavior and its problems, including our knowledge of the limitations of this evidence, is the warrant that justifies all the benefits that we acquire in our professional role: pay, honorific titles, and special abilities to perform certain acts (such as admitting patients to hospitals). If we do not in fact possess such special knowledge and abilities, then we are in effect engaged in a deceptive sham in which we are illegitimately acquiring our special role and treatment (Dawes, 1994; O'Donohue & Lilienfeld, in press). In such cases, clients are placing their trust, futures, and interests in the hands of individuals who may not have earned that trust.
In this chapter, we discuss the advantages and the necessity of adopting a scientific perspective concerning psychopathology and its treatment. We argue that there is persuasive scientific evidence that we as human cognitive agents can come all too easily to plausible, but erroneous, beliefs. For example, we can believe that x is associated (e.g., correlated) with y when it is not or that x causes y even when it does not (e.g., that a certain treatment reliably produces certain outcomes). Furthermore, we contend that specialists and experts, such as behavioral health professionals, possess a special duty to remain vigilant against erroneous ways of thinking and to hold beliefs that are justified by the best scientific evidence available.
Most centrally, we maintain that science—an applied epistemology (approach to knowledge) that features specialized ways of forming beliefs—is the best way to minimize error in our web of beliefs. Science, we propose, is the best safeguard we have at our disposal against commonplace biases and lapses in reasoning to which we are all prone. As Carl Sagan (1995) observed, the protections afforded by science are especially crucial when testing our most cherished beliefs, such as those derived from our own preferred theoretical orientations. We also argue that science provides the most trustworthy basis for solving the myriad problems we confront in behavioral health—problems related to what causes disorders and how to measure and treat them.
Thus, clinical science entails that behavioral health professionals possess what we call an epistemic duty—a duty to know. Moreover, this epistemic duty is best enacted through a critical knowledge of the scientific method in psychology and the relevant scientific literature (see also McFall, 1991). We agree with McFall (1991) that many popular competing views of an appropriate epistemology for behavioral health are mistaken. Finally, we contend that science offers the best way to meet our epistemic duties and to solve the growing problems that face us as a profession.

KNOWLEDGE

To be effective clinical scientists, we must base our actions and decisions on knowledge. We should not simply guess or believe, but instead know how nature, in this case human nature, actually operates to influence behavior. Epistemology is the branch of philosophy that addresses such questions as "What is knowledge?" and "What are the best ways to acquire knowledge?" One of its main tasks in the former case is to distinguish knowledge from other forms of belief, such as mere opinion, armchair speculation, false belief, and unwarranted belief.
Although epistemology can be dated back as far as Plato in the fourth century B.C., there have been dramatic changes in the study of knowledge in the 20th and now 21st centuries. Because something called "science" has produced an unprecedented accumulation of accurate knowledge, epistemologists have turned to the question of "What is special about science that has made it so fertile in producing such knowledge?" This is one of the central questions of a specialty area known as the philosophy of science.
We live in an era in which scientific triumphs are taken increasingly as commonplace. Therefore, it may be worthwhile to reflect briefly on the fundamental shifts in knowledge and daily life that have accrued from the "scientific revolution."
Before the scientific revolution, we did not know whether the sun revolved around the earth or vice versa. We did not understand gravity or other laws of motion. Electricity was unknown. So were the causes and cures of most diseases. Without an understanding of microscopic organisms, such as bacteria and viruses, and their interactions with parts of the human body, little could be done to understand—let alone treat or prevent—much sickness and many causes of death. The number and nature of chemical elements were poorly understood. As a consequence, what is now known as materials science was also unknown. Thus, the technology that flows from materials science to produce everything from Post-it® notes, to enduring and safe toys, to car bodies that are lightweight (for gas mileage), rust-resistant, and strong (for safety), could not be developed. Advances in botany facilitated the agricultural revolution, so that many people were freed from farming to pursue activities that satisfied other human needs, such as acquiring knowledge. Advances in engineering have allowed computer hardware to become inexpensive and amazingly efficient. It is fair to say that our everyday Western society—especially much of its comforts, relative safety, and efficacy—is so imbued with science and technology that they have become part of the background that we take for granted. Moreover, we have so counted on science as a problem-solving mechanism that when we experience such problems as oil shortages, impending flu pandemics, and potential terrorist attacks, we look to scientists to help us solve them.
It is also easy to take for granted many of the previous scientific achievements in behavioral health. In the first world, few if any mental health hospitals can today be called "snake pits." However, before the rise of effective antipsychotic medications in the 1950s, the situation was far different. The delusions and hallucinations of individuals with schizophrenia were so unmanageable that patients were put in cells, chained to chairs, or, if left uncontrolled, found yelling and smearing their feces on the walls. As most readers of abnormal psychology textbooks know, the word "bedlam" derived from a cockney pronunciation of Bethlehem—a mental hospital in England in which chaos reigned supreme. In addition, effective technologies based on learning principles have been developed to help mentally retarded and autistic children learn a wide range of functional skills, including language. Even bedwetting can be successfully treated with bell-and-pad technologies (see Chambless et al., 2006; O'Donohue & Fisher, in press).
Thus, we have come a long way with the diagnosis and treatment of many behavioral health problems. Other such problems have been refractory, either because they have received scant scientific attention (for example, many of the paraphilias or personality disorders) or because the efforts to resolve these problems have yet to yield positive results (Laws & O'Donohue, 2001; O'Donohue, Fowler, & Lilienfeld, 2005).

Why Is Science Necessary?

One of the major reasons why science is necessary is that humans often form firmly held beliefs that are mistaken. This tendency is hardly limited to practicing clinicians, as academic researchers are probably just as prone to such errors as everyone else (Meehl, 1993). Compounding the problem of firmly held but erroneous beliefs is the fact that most people are blissfully unaware of their own cognitive biases. For example, Pronin, Gilovich, and Ross (2004) found evidence for what they termed "bias blind spot," whereby most people are adept at pointing out cognitive biases in others but incapable of recognizing the same biases in their own reasoning. Nevertheless, good scientists, including good clinical scientists, are aware of their propensities toward bias and rely on scientific methods to compensate for them. As noted earlier, it is especially crucial to avail ourselves of these scientific methods when our favored theoretical beliefs are at stake.
There are numerous examples of erroneous beliefs in history, from earth-centered views of the universe, to misestimating the size of the earth, to believing that human physiology was a function of the moon and four basic humors, to believing that there were only four kinds of elements—earth, water, fire, and air. Psychologists and philosophers have studied and begun to categorize the myriad ways in which human cognition is subject to error. We will discuss three of the most important errors here (see also Chapter 2).

Confirmation Bias

"The mother of all biases," confirmation bias, is probably the central bias that the scientific method was developed to circumvent. We can define this bias as the tendency to selectively seek out and recall information consistent with one's hypotheses and to neglect or reinterpret information inconsistent with them.
Several investigators have found that clinicians fall prey to confirmation bias when asked to recall information regarding clients. For example, Strohmer, Shivy, and Chiodo (1990) asked counselors to read three versions of a case history of a client: one containing an equal number of descriptors indicating good and poor self-control, one containing more descriptors indicating good than poor self-control, and one containing more descriptors indicating poor than good self-control. One week after reading this case history, the counselors were asked to offer as many factors as they could remember that "would be helpful in determining whether or not [the client] lacked self-control" (p. 467). The counselors offered more information that would be helpful for confirming than disconfirming the hypothesis that the client lacked self-control, even in the condition in which the client was characterized mostly by good self-control descriptors.
Researchers, too, are prone to confirmation bias. For example, Mahoney (1977) asked 75 journal reviewers with strong behavioral orientations to evaluate hypothetical manuscripts that contained identical research designs but strikingly different results. In some cases, these results were consistent with traditional behavioral views (reinforcement strengthens motivation), whereas in other cases they contradicted these views (reinforcement undermines motivation). Reviewers were far more likely to evaluate the paper positively if it confirmed their preexisting views (e.g., "A very fine study," "An excellent paper ...") than if it disconfirmed them (e.g., "There are so many problems with this paper, it is difficult to decide where to begin," "a serious, mistaken conclusion").
Similarly, there is evidence that clinicians are prone to the related phenomenon of premature closure in diagnostic decision making: they frequently reach conclusions on the basis of too little information (Garb, 1989). For example, Gauron and Dickinson (1969) reported that psychiatrists who observed a videotaped interview frequently formed diagnostic impressions within 30 to 60 seconds. Premature closure may be both a cause and a consequence of confirmation bias. It may produce confirmation bias by effectively halting the search for data that could refute the clinicians’ preexisting hypotheses. It may result from confirmation bias because clinicians may reach rapid conclusions by searching selectively for data that confirm these hypotheses.

Illusory Correlation

Clinicians, like all individuals, are prone to illusory correlation, which we can define as the perception of (a) a statistical association that does not exist or (b) a stronger statistical association than is present. Illusory correlations are especially likely to arise when individuals hold powerful a priori expectations regarding the covariation between certain events or stimuli. Such correlations are almost certainly in part a product of our propensity to detect meaningful patterns in random data (Gilovich, 1991). Although this tendency is often adaptive in that it can help us to make sense of our confusing external worlds, it can lead us astray in certain situations (see also Chapter 2).
For example, many individuals are convinced that a strong correlation exists between the full moon and psychiatric hospital admissions, even though research has demonstrated convincingly that this association is a mental mirage (Rotton & Kelly, 1985). Moreover, many parents of autistic children are certain that the onset of their children's disorder coincides with the administration of mercury-bearing vaccines, although large and carefully conducted epidemiological investigations disconfirm this association (Herbert, Gaudiano, & Sharp, 2002).
In a classic study of illusory correlation, Chapman and Chapman (1967) examined why psychologists perceive clinically meaningful associations between signs (e.g., large eyes) on the Draw-A-Person (DAP) test (a commonly used human figure drawing task) and psychiatric symptoms (e.g., suspiciousness), even though research has demonstrated that these associations do not exist (Kahill, 1984). They presented undergraduates with DAP protocols that were purportedly produced by psychiatric patients with certain psychiatric symptoms (e.g., suspiciousness). Each drawing was paired randomly with two of these symptoms, which were listed on the bottom of each drawing. Undergraduates were asked to inspect these drawings and estimate the extent to which certain DAP signs co-occurred with these symptoms.
Chapman and Chapman found that participants "discovered" that certain DAP signs tended to co-occur consistently with certain psychiatric symptoms, even though the pairing between DAP signs and symptoms in the original stimulus materials was entirely random. For example, participants perceived large eyes in drawings as co-occurring with suspiciousness, and broad shoulders in drawings as co-occurring with doubts about manliness. Interestingly, these are the same associations that tend to be perceived by clinicians who use the DAP (Chapman & Chapman, 1967). Illusory correlation has been demonstrated with other projective techniques, including the Rorschach (Chapman & Chapman, 1969) and sentence completion tests (Starr & Katkin, 1969). Illusory correlation may be most likely when, as in the case of the DAP, individuals hold strong a priori expectations regarding the associations between stimuli.
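The statistical logic behind this demonstration can be made concrete with a short simulation. The sketch below (a minimal, hypothetical Python example; the sign and symptom labels and the 50% base rates are our illustrative assumptions, not Chapman and Chapman's actual stimuli) pairs a drawing sign with a symptom entirely at random and then computes the true association, which comes out near zero:

    import random

    # Minimal sketch of the random-pairing logic in Chapman & Chapman (1967).
    # The labels ("large eyes," "suspiciousness") and the 50% base rates are
    # illustrative assumptions, not the original stimulus materials.
    random.seed(42)
    n = 10_000  # number of simulated DAP protocols

    # Assign sign and symptom to each protocol independently at random.
    sign = [random.random() < 0.5 for _ in range(n)]     # drawing shows large eyes?
    symptom = [random.random() < 0.5 for _ in range(n)]  # listed symptoms include suspiciousness?

    # Tally the 2x2 contingency table.
    a = sum(s and y for s, y in zip(sign, symptom))      # sign present, symptom present
    b = sum(s and not y for s, y in zip(sign, symptom))  # sign present, symptom absent
    c = sum(y and not s for s, y in zip(sign, symptom))  # sign absent, symptom present
    d = n - a - b - c                                    # both absent

    # Phi coefficient: the Pearson correlation for a 2x2 table.
    phi = (a * d - b * c) / (((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5)
    print(f"phi under random pairing: {phi:+.3f}")       # prints a value close to 0

Because the true correlation in randomly paired materials is essentially zero, any sizable association that observers report must come from their prior expectations rather than from the data, which is precisely what illusory correlation describes.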

Hindsight Bias

Individuals tend to overestimate the likelihood that they would have predicted an outcome once they have become aware of it, a phenomenon known as hindsight bias (Fischhoff, 1975) or the "I knew it all along" effect. Arkes et al. (1981) examined the effects of hindsight bias on medical decision making. Physicians were assigned randomly to one of five groups, each of which was given the same case history. The foresight group was asked to assign a probability estimate to each of four potential medical diagnoses. Each of the four hindsight groups was told that one of the four diagnoses was correct, and was then asked to predict the likelihood that they would have selected that diagnosis. The hindsight groups that were assigned the least likely diagnoses indicated a much greater likelihood...

Table of contents

  1. Cover Page
  2. Half Title page
  3. Title Page
  4. Copyright Page
  5. Contents
  6. Dedication
  7. Foreword
  8. About the Editors
  9. Contributors
  10. Introduction
  11. Part I How to Think Clearly About Clinical Science
  12. Part II The Great Paradigms of Clinical Science
  13. Part III The Great Crosscutting Perspectives of Clinical Science
  14. Index