Experimental Methods in Survey Research

About this book

A thorough and comprehensive guide to the theoretical, practical, and methodological approaches used in survey experiments across disciplines such as political science, health sciences, sociology, economics, psychology, and marketing

This book explores and explains the broad range of experimental designs embedded in surveys that use both probability and non-probability samples. It approaches the usage of survey-based experiments with a Total Survey Error (TSE) perspective, which provides insight on the strengths and weaknesses of the techniques used.

Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment addresses experiments on within-unit coverage, reducing nonresponse, question and questionnaire design, minimizing interview measurement bias, using adaptive design, trend data, vignettes, the analysis of data from survey experiments, and other topics, across social, behavioral, and marketing science domains.

Each chapter begins with a description of the experimental method or application and its importance, followed by reference to relevant literature. At least one detailed original experimental case study then follows to illustrate the experimental method's deployment, implementation, and analysis from a TSE perspective. The chapters conclude with theoretical and practical implications on the usage of the experimental method addressed. In summary, this book:

  • Fills a gap in the current literature by successfully combining the subjects of survey methodology and experimental methodology in an effort to maximize both internal validity and external validity
  • Offers a wide range of types of experimentation in survey research with in-depth attention to their various methodologies and applications
  • Is edited by internationally recognized experts in the field of survey research/methodology and in the usage of survey-based experimentation, featuring contributions from across a variety of disciplines in the social and behavioral sciences
  • Presents advances in the field of survey experiments, as well as relevant references in each chapter for further study
  • Includes more than 20 types of original experiments carried out within probability sample surveys
  • Addresses myriad practical and operational aspects for designing, implementing, and analyzing survey-based experiments by using a Total Survey Error perspective to address the strengths and weaknesses of each experimental technique and method

Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment is an ideal reference for survey researchers and practitioners in areas such as political science, health sciences, sociology, economics, psychology, public policy, data collection, data science, and marketing. It is also a very useful textbook for graduate-level courses on survey experiments and survey methodology.

Experimental Methods in Survey Research, edited by Paul J. Lavrakas, Michael W. Traugott, Courtney Kennedy, Allyson L. Holbrook, Edith D. de Leeuw, and Brady T. West, is available in PDF and ePUB format, under Social Sciences & Social Science Research & Methodology.


1
Probability Survey‐Based Experimentation and the Balancing of Internal and External Validity Concerns

Paul J. Lavrakas 1, Courtney Kennedy 2, Edith D. de Leeuw 3, Brady T. West 4, Allyson L. Holbrook 5, and Michael W. Traugott 6
1 NORC, University of Chicago, 55 East Monroe Street, Chicago, IL 60603, USA
2 Pew Research Center, Washington, DC, USA
3 Department of Methodology & Statistics, Utrecht University, Utrecht, the Netherlands
4 Survey Research Center, Institute for Social Research, University of Michigan, Ann Arbor, MI, USA
5 Departments of Public Administration and Psychology and the Survey Research Laboratory, University of Illinois at Chicago, Chicago, IL, USA
6 Center for Political Studies, Institute for Social Research, University of Michigan, Ann Arbor, MI, USA
The use of experimental designs is an extremely powerful scientific methodology for directly testing causal relationships among variables. Survey researchers reading this book will find that they have much to gain from taking greater advantage of controlled experimentation with random assignment of sampled cases to different experimental conditions. Experiments embedded within probability‐based survey samples make for a particularly valuable research method, as they combine the ability to draw causal attributions more confidently, on the basis of a true experimental design, with the ability to generalize the results of an experiment with a known degree of confidence to the target population that the survey has sampled (cf. Fienberg and Tanur 1987, 1989, 1996). Moreover, starting in the late 1980s, the rapid acceptance of using technology to gather data via computer‐assisted telephone interviewing (CATI), computer‐assisted personal interviewing (CAPI), and computer‐assisted web interviewing (CAWI) has made it operationally easy for researchers to embed experimental designs within their surveys.
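The combination described above can be sketched in a few lines of code. The following is a minimal illustration (not from the chapter), assuming a Python setting with a hypothetical household frame and hypothetical condition labels: a probability sample is drawn from the frame (supporting external validity), and each sampled case is then randomly assigned to an experimental condition (supporting internal validity).

```python
import random

def probability_sample_with_experiment(frame, n, conditions, seed=2019):
    """Draw a simple random sample from the frame, then randomly assign
    each sampled case to one experimental condition with roughly equal
    cell sizes. Seeding makes the design reproducible."""
    rng = random.Random(seed)
    sample = rng.sample(frame, n)        # random sampling -> external validity
    rng.shuffle(sample)
    # Deal shuffled cases round-robin into conditions, so cell sizes
    # differ by at most one.
    return {case: conditions[i % len(conditions)]
            for i, case in enumerate(sample)}  # random assignment -> internal validity

# Hypothetical sampling frame of 100,000 households
frame = [f"HH{i:05d}" for i in range(100_000)]
design = probability_sample_with_experiment(frame, 1200, ["control", "treatment"])
```

In a CATI, CAPI, or web survey, the resulting assignment would simply be merged onto the sample file so the interviewing software presents the correct condition to each case.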
Prior to the 1990s, the history of experimentation in the social sciences, especially within psychology, essentially reflected a primary concern for maximizing the ability to identify empirically based cause‐and‐effect relationships – ones with strong internal validity (Campbell and Stanley 1966) – with little regard for whether the study could be generalized with confidence – i.e. to what extent the study had strong external validity (Campbell and Stanley 1966) – beyond the particular group of subjects/respondents that participated in the experiment. This latter concern remains highly pertinent with the current “replication crisis” facing the health, social, and behavioral sciences and its focus on “reproducibility” as a contemporary criterion of good experimentation (Wikipedia 2018a,b).
This is not to say that prior to 1990 all experimental social scientists were unconcerned about external validity (cf. Orwin and Boruch 1982), but rather that the research practices of most suggested that they were not. In contrast, practical experience within the field of survey research suggests that many survey researchers have focused on external validity but failed to use experimental designs to enhance their research aims by increasing the internal validity of their studies. An example of this is in the realm of the political polling that is done by and for news media organizations. Far too often, political polls merely generate point estimates (e.g. 23% of the public approves of the job that the President is doing) without investigating what drives the approval and disapproval through the usage of question wording experiments (cf. Traugott and Lavrakas 2016).
A case in point: The results of a nonexperimental preelection study on the effects of political advertising were posted on the list‐server of the American Association for Public Opinion Research (AAPOR) in 2000. This survey‐based research was conducted via the Internet and reported finding that a certain type of advertising was more persuasive to potential voters than another type. By using the Internet as the data collection mode, this survey was able to display the ads – which were presented as digitized video segments – in real‐time to respondents/subjects as part of the data collection process and, thereby, simulate the televised messages to which voters routinely are exposed in an election campaign. Respondents were shown all of the ads and then asked to provide answers to various questions concerning their reactions to each type of ad and its influence on their voting intentions. This was done in the individual respondent's own home in a room where the respondent normally would be watching television. Here, the Internet was used in a very effective and creative way to provide mundane realism (and thereby contribute to "ecological validity") to the research study by having survey respondents react to ads in a context quite similar to one in which they would be exposed to real political ads while they were enjoying a typical evening at home viewing television. Unlike the many social science research studies that are conducted under conditions far removed from "real life," this study went a long way toward eliminating the potential artificiality of the research environment as a serious threat to its overall validity.
Another laudable design feature of this study was that the Internet sample of respondents was chosen with a rigorous scientific sampling scheme so that it could reasonably be said to represent the population of potential American voters. The sample came from a large, randomly selected panel of U.S. households that had received Internet technology (WebTV) in their homes, allowing the researchers to survey subsets of the sample at various times. Unlike most social science research studies that have studied the effects of political advertising by showing the ads in a research laboratory setting (e.g. a centralized research facility on a university campus), the validity of this study was not threatened by the typical convenience sample (e.g. undergraduates “volunteering” to earn course credit) that researchers often rely upon to gather data. Thus, the results of this Internet research were based on a probability sample of U.S. households and, thereby, could reasonably be generalized to the potential U.S. electorate.
As impressive as these features of this research design were, the design had a serious, yet unnecessary, methodological limitation – one that caused it to miss a golden opportunity to add considerably to the overall validity of the conclusions that could have been drawn from its findings. The research design that was used displayed all the political ads to each respondent, one ad at a time. There were no features built into the design that controlled either for the possible effects of the order in which the respondent saw the ads or for having each respondent react to more than one ad within the same data collection session. As such, the cause‐and‐effect conclusions that could be drawn from this nonexperimental study design about which ads “caused” stronger respondent reactions rested on very weak footing. Since no design feature was used to control for the fact that respondents viewed multiple ads within the same data collection session, the validity of the conclusions drawn about the causality underlying the results remained little more than speculations on the part of the researchers, because such factors as the order of the ads and number of ads were not varied in a controlled manner by the researchers. Unfortunately, this missed opportunity is all too common in many survey‐based research studies in the social sciences.
This study could have lent itself to the use of various experimental designs whereby a different political ad (i.e. the experimental stimuli) or different subsets of ads could have been randomly assigned to different subsamples of respondents. Or, the order of the presentation of the entire set of political ads could have been randomly assigned across respondents. In either case, an experimental design with random assignment would have provided the researchers with a far stronger basis (i.e. a study with greater internal validity) from which to draw causal inferences. Furthermore, such an experimental approach would have had little or no cost implications for the research budget, and under a design where one and only one ad was shown to any one respondent, would likely have saved data collection costs.
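The two remedies proposed above, assigning each respondent a single ad (a between‐subjects design) or showing every respondent all ads in an independently randomized order, can each be expressed as a simple assignment routine. The sketch below is illustrative only and not from the chapter; the ad labels and function names are hypothetical.

```python
import random

ADS = ["attack_ad", "contrast_ad", "positive_ad"]  # hypothetical ad types

def between_subjects_design(respondent_ids, stimuli, seed=7):
    """Each respondent is randomly assigned exactly one ad, with
    roughly equal cell sizes across the stimuli."""
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)
    return {rid: stimuli[i % len(stimuli)] for i, rid in enumerate(ids)}

def randomized_order_design(respondent_ids, stimuli, seed=7):
    """Each respondent sees all ads, but in an independently shuffled
    order, so presentation-order effects average out across the sample."""
    rng = random.Random(seed)
    design = {}
    for rid in respondent_ids:
        order = list(stimuli)
        rng.shuffle(order)
        design[rid] = order
    return design
```

Either routine adds essentially no fieldwork cost: the randomization is computed once, before data collection, and the survey software simply reads the assigned stimulus or ordering for each case.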
Using this example as our springboard, it is the goal of this book to explain how experimental designs can and should be deployed more often in survey research. It is most likely through the use of a true experimental design, with the random assignment of subjects/respondents to experimental conditions, that researchers gain the strong empirical basis from which they can make confident statements about causality. Furthermore, because of the widespread use of computer‐assisted technologies in survey research, the use of a true experiment within a survey often adds little or no cost at all to the budget. In addition, embedding an experiment into a survey most often provides the advantage that the cell sizes of the different experimental groups can be much larger than in traditional experimental designs (e.g. in social psychology), which has the benefit that...

Table of contents

  1. Cover
  2. Table of Contents
  3. List of Contributors
  4. Preface by Dr. Judith Tanur
  5. About the Companion Website
  6. 1 Probability Survey-Based Experimentation and the Balancing of Internal and External Validity Concerns
  7. Part I: Introduction to Section on Within-Unit Coverage
  8. Part II: Survey Experiments with Techniques to Reduce Nonresponse
  9. Part III: Overview of the Section on the Questionnaire
  10. Part IV: Introduction to Section on Interviewers
  11. Part V: Introduction to Section on Adaptive Design
  12. Part VI: Introduction to Section on Special Surveys
  13. Part VII: Introduction to Section on Trend Data
  14. Part VIII: Vignette Experiments in Surveys
  15. Part IX: Introduction to Section on Analysis
  16. Index
  17. End User License Agreement