Psychological Science Under Scrutiny

Recent Challenges and Proposed Solutions

Edited by Scott O. Lilienfeld and Irwin D. Waldman
About this book

Psychological Science Under Scrutiny explores a range of contemporary challenges to the assumptions and methodologies of psychology, in order to encourage debate and ground the discipline in solid science. 
  • Discusses the pointed challenges posed by critics to the field of psychological research, which have given pause to psychological researchers across a broad spectrum of sub-fields
  • Argues that those conducting psychological research need to fundamentally change the way they think about data and results, in order to ensure that psychology has a firm basis in empirical science
  • Places the recent challenges discussed into a broad historical and conceptual perspective, and considers their implications for the future of psychological methodology and research
  • Challenges discussed include confirmation bias, the effects of grant pressure, false-positive findings, overestimating the efficacy of medications, and high correlations in functional brain imaging
  • Chapters are authored by internationally recognized experts in their fields, and are written with a minimum of specialized terminology to ensure accessibility to students and lay readers


Part I
Cross‐Cutting Challenges to Psychological Science

1
Maximizing the Reproducibility of Your Research

Open Science Collaboration
Commentators in this book and elsewhere describe evidence that modal scientific practices in design, analysis, and reporting interfere with the credibility and veracity of the published literature (Begley & Ellis, 2012; Ioannidis, 2005; Miguel et al., 2014; Simmons, Nelson, & Simonsohn, 2011; see Chapters 2 and 3). The reproducibility of published findings appears to be lower than many would expect or desire (Fuchs, Jenny, & Fiedler, 2012; Open Science Collaboration, 2015; Pashler & Wagenmakers, 2012). Further, the common practices that interfere with reproducibility are maintained by incentive structures that prioritize innovation over accuracy (Nosek, Spies, & Motyl, 2012). Delving into the metascience literature on these practices can leave the individual scientist with a discouraging conclusion: "I cannot change the system on my own, so what should I do?"
This chapter provides concrete suggestions for increasing the reproducibility of one's own research. We address reproducibility across the research lifecycle: project planning, project implementation, data analysis, reporting, and programmatic research strategies. We also attend to practical considerations for surviving and thriving in the present scientific culture, while simultaneously promoting a cultural shift toward transparency and reproducibility through the collective effort of independent scientists and teams. Most of these suggestions can be incorporated easily into the daily workflow without substantial additional work in the short term, and may save substantial time in the long term. Further, journals, granting agencies, and professional organizations are adding recognition and incentives for reproducible science, such as badges for open practices (Kidwell et al., 2016) and the TOP Guidelines for journal and funder transparency policies (Nosek et al., 2015). Doing reproducible science will increasingly be seen as a way to advance one's career, and this chapter may provide a means to get a head start.

Project Planning

Use high‐powered designs

Within the nearly universal null hypothesis significance testing (NHST) framework, there are two inferential errors that can be made: (1) falsely rejecting the null hypothesis (i.e., believing that an effect exists, even though it does not), and (2) falsely failing to reject it when it is false (i.e., believing that no effect exists, even though it does). "Power" is the probability of correctly rejecting the null hypothesis when an effect actually exists (see Chapters 3 and 4). Power depends on the size of the investigated effect, the alpha level, and the sample size. Low statistical power undermines the purpose of scientific research: it reduces the chance of detecting a true effect, but also, perhaps less intuitively, reduces the likelihood that a statistically significant result reflects a true effect (Ioannidis, 2005). The problem of low statistical power has been known for over 50 years: Cohen (1962) estimated that, in psychological research, the average power of studies to detect small and medium effects was 18% and 48%, respectively, a situation that had not improved almost 25 years later (Sedlmeier & Gigerenzer, 1989). More recently, Button and colleagues (2013) showed that the median statistical power of studies in the neurosciences lies between 8% and 31%.
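To make these dependencies concrete, the following is a minimal simulation sketch (our illustration, not part of the chapter); the effect size, sample sizes, and test are illustrative assumptions. It estimates power for a two-sample t-test by brute force: simulate many studies in which a true effect exists and count how often the test detects it.

```python
# Illustrative sketch: estimate statistical power by simulation,
# assuming a two-sample t-test and a "medium" standardized effect (d = 0.5).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=1)

def simulated_power(effect_size, n_per_group, alpha=0.05, n_sims=10_000):
    """Fraction of simulated studies in which a true effect of the given
    standardized size yields p < alpha."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)
        _, p = ttest_ind(treatment, control)
        hits += p < alpha
    return hits / n_sims

# With d = 0.5 and 20 participants per group, power is only about 1 in 3;
# roughly 64 per group is needed to reach the conventional 0.80.
print(simulated_power(0.5, n_per_group=20))  # ~0.33
print(simulated_power(0.5, n_per_group=64))  # ~0.80
```

Closed-form power calculators (e.g., statsmodels' TTestIndPower) give the same answers for standard tests, but the simulation approach generalizes to designs for which no textbook formula exists.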
Considering that the problems of low power are well known and pernicious, it may seem surprising that low-powered research is still the norm. Among the reasons for its persistence: (1) resources are limited; (2) researchers know that low power is a problem but underestimate its magnitude; and (3) there are insidious, perhaps unrecognized, incentives for engaging in low-powered research when publication of positive results is the primary objective. That is, it is easier to obtain false-positive results with small samples, particularly by spending one's limited resources on many small studies rather than one large study (Bakker, van Dijk, & Wicherts, 2012; Button et al., 2013; Ioannidis, 2005; Nosek et al., 2012). Given the importance of publication for academic success, these are formidable barriers.
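The arithmetic behind point (3) is simple: if a "significant" result anywhere counts as a success, splitting a fixed budget into many small null studies inflates the chance of at least one false positive. A back-of-the-envelope sketch (our numbers, purely illustrative):

```python
# Probability of at least one false positive across k independent null
# studies, each tested at alpha = .05 (illustrative arithmetic).
alpha = 0.05
for k in (1, 5, 10):
    print(k, round(1 - (1 - alpha) ** k, 3))  # 1 -> 0.05, 5 -> 0.226, 10 -> 0.401
```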
What can you do? To start with, consider the conceptual argument countering the publication incentive. If the goal is to produce accurate science, then adequate power is essential. When studying true effects, higher power increases the likelihood of detecting them. Further, the lure of publication is tempting, but the long‐term benefits are greater if the published findings are credible. Which would you rather have: more publications with uncertain accuracy, or fewer publications with more certain accuracy? Doing high‐powered research will take longer, but the rewards may last longer.
Recruiting a larger sample is the most obvious remedy, when feasible, but there are also design strategies that increase power without additional participants. For some studies, within‐subject and repeated‐measurement designs are feasible, and these are more powerful than between‐subject and single‐measurement designs: repeated measures allow participants to serve as their own controls, reducing variance in the data. Experimental manipulations likewise tend to be powerful, because they minimize confounding influences, and reliable outcome measures reduce measurement error. For example, all else being equal, a study investigating hiring practices will have greater power if participants make decisions about many candidates than if they make a single dichotomous decision about one candidate in an elaborate scenario. Finally, standardizing procedures and maximizing the fidelity of manipulation and measurement during data collection will increase power. The simulation sketch below illustrates the within‐subject advantage.
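As a minimal sketch (our illustration, with assumed variance components), the simulation below compares a between-subject and a within-subject version of the same study. When stable between-person differences are large relative to measurement noise, the paired design detects the same effect far more often at the same sample size.

```python
# Illustrative sketch: within-subject designs remove stable between-person
# variance from the comparison, so each participant is their own control.
import numpy as np
from scipy.stats import ttest_ind, ttest_rel

rng = np.random.default_rng(seed=2)

def simulated_power(design, n=30, effect=0.5, person_sd=1.0, noise_sd=0.5,
                    alpha=0.05, n_sims=5_000):
    hits = 0
    for _ in range(n_sims):
        if design == "within":
            person = rng.normal(0, person_sd, n)           # stable traits
            pre = person + rng.normal(0, noise_sd, n)
            post = person + effect + rng.normal(0, noise_sd, n)
            _, p = ttest_rel(post, pre)                    # paired test
        else:
            a = rng.normal(0, person_sd, n) + rng.normal(0, noise_sd, n)
            b = rng.normal(effect, person_sd, n) + rng.normal(0, noise_sd, n)
            _, p = ttest_ind(b, a)                         # independent test
        hits += p < alpha
    return hits / n_sims

print(simulated_power("between"))  # ~0.40 under these assumptions
print(simulated_power("within"))   # ~0.96: same n, same true effect
```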
A complementary approach to high‐powered research is collaboration. When a single research group cannot achieve the sample size required for sufficient statistical power, multiple groups can administer the same study materials and then combine their data. For example, the first "Many Labs" replication project administered the same study across 36 samples totaling more than 6,000 participants, producing both extremely high‐powered tests of the effects and sufficient data to test for variability across samples and settings (Klein et al., 2014). Likewise, large‐scale collaborative consortia have transformed the reliability of findings in fields such as human genetic epidemiology (Austin, Hair, & Fullerton, 2012). Even combining efforts across three or four labs can increase power dramatically while minimizing the labor and resource impact on any one contributor, and concerns about project leadership and publication opportunities can be addressed with quid pro quo agreements: "you run my study, I'll run yours."
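When labs cannot pool raw data, a standard option is to combine lab-level estimates instead. The sketch below (our illustration, with made-up numbers) shows inverse-variance, fixed-effect pooling; the pooled standard error is smaller than any single lab's, which is the statistical payoff of collaboration.

```python
# Illustrative sketch: inverse-variance (fixed-effect) pooling of
# hypothetical per-lab effect estimates and standard errors.
import numpy as np

estimates = np.array([0.21, 0.35, 0.12, 0.28])  # per-lab effect estimates
ses = np.array([0.15, 0.12, 0.20, 0.10])        # per-lab standard errors

weights = 1.0 / ses**2                          # precision weights
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(round(pooled, 3), round(pooled_se, 3))    # pooled SE ~0.065 < min(ses)
```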

Create an analysis plan

Researchers have many decisions to make when conducting a study and analyzing data. Which data points should be excluded? Which conditions and outcomes...

Table of contents

  1. Cover
  2. Title Page
  3. Table of Contents
  4. List of Contributors
  5. Introduction
  6. Part I: Cross‐Cutting Challenges to Psychological Science
  7. Part II: Domain‐Specific Challenges to Psychological Science
  8. Part III: Psychological and Institutional Obstacles to High‐Quality Psychological Science
  9. Afterword: Crisis? What Crisis?
  10. Index
  11. End User License Agreement