Interview Research in Political Science

About this book

Interviews are a frequent and important part of empirical research in political science, but graduate programs rarely offer discipline-specific training in selecting interviewees, conducting interviews, and using the data thus collected. Interview Research in Political Science addresses this vital need, offering hard-won advice for both graduate students and faculty members. The contributors to this book have worked in a variety of field locations and settings and have interviewed a wide array of informants, from government officials to members of rebel movements and victims of wartime violence, from lobbyists and corporate executives to workers and trade unionists.

The authors encourage scholars from all subfields of political science to use interviews in their research, and they provide a set of lessons and tools for doing so. The book addresses how to construct a sample of interviewees; how to collect and report interview data; and how to address ethical considerations and the Institutional Review Board process. Other chapters discuss how to link interview-based evidence with causal claims; how to use proxy interviews or an interpreter to improve access; and how to structure interview questions. A useful appendix contains examples of consent documents, semistructured interview prompts, and interview protocols.

Contributors: Frank R. Baumgartner, The University of North Carolina at Chapel Hill; Matthew N. Beckmann, University of California, Irvine; Jeffrey M. Berry, Tufts University; Erik Bleich, Middlebury College; Sarah M. Brooks, The Ohio State University; Melani Cammett, Brown University; Lee Ann Fujii, University of Toronto; Mary Gallagher, University of Michigan; Richard L. Hall, University of Michigan; Marie Hojnacki, Pennsylvania State University; David C. Kimball, University of Missouri, St. Louis; Beth L. Leech, Rutgers, the State University of New Jersey; Julia F. Lynch, University of Pennsylvania; Cathie Jo Martin, Boston University; Lauren MacLean, Indiana University; Layna Mosley, The University of North Carolina at Chapel Hill; Robert Pekkanen, University of Washington; William Reno, Northwestern University; Reuel R. Rogers, Northwestern University


Part 1

GENERAL CONSIDERATIONS

Research Design, Ethics, and the Role of the Researcher

1


ALIGNING SAMPLING STRATEGIES WITH ANALYTIC GOALS

Julia F. Lynch
In political science, information gleaned from interviews can serve a number of purposes, depending on the stage in the research process, the goals of the research, and external constraints on the amount or type of interviews we can do. Interviews can be undertaken as a preliminary to the main study, as the main source of data for a study, or as one component in a multi-method research project. Interviews may be used to generate data or metadata, to test descriptive or causal hypotheses, to enhance the validity or reliability of our measures, or as a source of illustrative material that enlivens our analyses and makes our writing more enjoyable and accessible. Each of these uses of interview research suggests a different set of requirements for selecting people to interview (and sometimes how to interview them). In turn, the choices we make about sampling have implications for the role that interview data can play in our analyses and in the larger enterprise of theory building in political science. The aim of this chapter is to develop a set of guidelines that will help researchers align sampling strategies with analytic goals in interview-based research.
How should interview researchers sample their respondents, particularly if they hope to use interviews as a part of a larger multi-method research agenda? One argument runs that because random sampling is required to generate unbiased descriptive and causal inferences about larger populations of people, organizations, or events, “real” data from interviews can only come when there is random sampling. Some authors argue that epistemological underpinnings of arguments about the value of in-depth data derived from interviews are at the very least incommensurate with the requirements of large-n research (Ahmed and Sil 2009; Beck 2009). Hence data derived from non-randomly selected interviews do nothing to enhance the validity of claims based on statistical analysis of aggregate-level data, and multi-method “triangulation” using such interview data isn’t worth much more than the paper the interview transcripts are printed on.
To be sure, studies that make claims about population characteristics based on convenience samples should be approached with skepticism. And when interview data are used as window dressing, there is often a temptation to select only quotations that are supportive of the overall argument of the analysis, or to anoint non-randomly selected respondents as “typical.” These practices may enliven an otherwise dry research narrative but cannot be considered multi-method research because they do not enhance the validity or reliability of claims generated using other methods.
However, even interviews with non-random samples of individuals (or of individuals associated with non-random samples of organizations and events) can add to our store of knowledge, and to multi-method research. For example, interviews conducted as a precursor to survey work can aid in the creation of more-reliable measures used in large-n studies. Case study interviews may add meat to large-n causal arguments by using causal process observations to generate Bayesian updates about what is happening and why at a given point in a causal chain or process (J. Mahoney 2009). Purposive or quota samples may be good enough in many cases to verify relationships first observed and validated using other methods. Insights drawn from in-depth research with non-randomly selected respondents may also generate relational, meta-level information about the society or organization in which they are embedded—information that is simply unobtainable any other way. For all these reasons, even non-random-sampling designs for interview research can enhance multi-method research. And interviews of randomly selected individuals can, when conducted and analyzed with rigor, contribute data that are ideal for integration with other forms of data in multi-method research.
Most political scientists who use, or plan to use, interview data in their work are familiar with at least one or two works whose findings hinge on data drawn from in-depth in-person interviews. In American politics, for example, Robert Lane’s Political Ideology (1962), Jennifer Hochschild’s What’s Fair (1981), and Richard Fenno’s Home Style (1978) are three classic works that place interview data at center stage. Lane’s book, subtitled “Why the Common Man Believes What He Does,” draws on a small number of in-depth interviews with non-elites to explore the roots of political views in the mass public. Hochschild conducted in-depth, semi-structured interviews with a larger number of non-elites—twenty-eight residents of New Haven, Connecticut—to understand how they thought about justice and fairness in a variety of domains of life (the economy, politics, and the social domain encompassing family, friends, and schooling). Fenno’s interviews with eighteen members of Congress as they went about their daily routines in their home districts allowed him to understand how elected officials’ views of their constituencies affect their political behavior. Interviews need not, of course, be the only or even main source of data for a research project. Interviews can be equally useful playing a supporting or costarring role. Deciding how to use interview data and figuring out whom to interview are both important decisions that need to be made with an eye to the role the interview data will play in the larger research agenda.1
For the purposes of this chapter, I argue from a positivist worldview: in other words, I assume that researchers will be using interview data in the service of a research agenda that ultimately aims to frame and test hypotheses about the political world. My focus on sampling and the related problems of inference derives from this epistemological position. It is worth noting, however, that many political scientists who use interview research take a different approach. Scholars working in a constructivist or interpretivist vein are more likely to view the information that comes out of an interview as discursively constructed and hence unique to the particular interaction among interviewer, interviewee, and interview context. When viewed from this perspective, the central methodological issue of interview research is not so much sampling in order to facilitate generalization, but rather interpreting the data from a given interview in light of the interactions that produced it. (Of course, positivists who look to interviews to provide “evidence” should pay at least as much attention as interpretivists do to the quality and characteristics of data produced in the interview setting. Many of the chapters in this volume treat this topic in more detail.)
The next section of this chapter explores some of the different ways that interview research can be used to contribute to a positivist political science research agenda. The subsequent section discusses alternative sampling techniques, with an eye to understanding the analytic leverage that these different techniques offer and how this leverage can be used in the pursuit of specific analytic goals. The conclusion brings us back to ground level with a discussion of practical constraints that may hinder researchers’ attempts to create optimal linkages between sampling strategies and research goals. A central message of the chapter is that the sampling methods researchers employ in their interview research are critical in determining whether and how interview data can be used to enhance the validity of interview-based and multi-method research.

Interviews and the Research Process

Interviews can be used productively in the service of a variety of different research goals, and at a variety of stages in the research process. The following examples are organized chronologically around the stage of research, and within that according to the analytic goals of the research.
Using Interviews in Preliminary Research
Preliminary research is research that occurs before collection of the data on which the main descriptive or causal hypotheses of a study will be tested. Interviews can be a valuable source of information in preliminary research, whether or not the main research project will use interview data.
In case study-based research, interviews at the pre-dissertation or scoping-out-a-new-project stage can use process-tracing questions to identify fruitful (and fruitless) avenues of research. Talking to people is often quicker than archival research for figuring out what happened when, who was involved, what were the important decisions, or where documentary materials related to your research question may be found. This type of preliminary interviewing is one method for quickly generating and testing in a “rough-and-ready” way a number of alternative hypotheses about a particular case study or case studies (Gerring 2007, chap. 3). Using preliminary interviews to get the lay of the land aids the purposive selection of cases for small-n studies, since some hypotheses have already been identified as irrelevant or, alternatively, in need of further testing.
Interviews also can be used (and often should be used) in advance of conducting a survey or behavioral experiment. In-depth interviews help the researcher get a sense of the opinions, outlooks, or cognitive maps of people who are similar to the research subjects who will eventually take part in the study. Interviews can help determine what questions are relevant and the appropriate range of response options (see e.g. Gallagher, this volume, chapter 9). Even if the researcher is fairly certain of the content of the questions she would like to ask or the games she would like her subjects to play, pretesting in a setting that allows for instant feedback from the respondent can help fine-tune question wording, question ordering, or visual prompts.
We have seen so far that preliminary interviews are often particularly useful because they allow us to refine our concepts and measures before embarking on a major research project. But interviews also can be an essential precursor to larger research projects when they are used to establish the sampling frame for a random sample or to figure out which characteristics to select for in a purposive sample. We will talk more about these types of sampling in the next section. What is important for the moment is that preliminary research is very often necessary before we can draw a sample, particularly if the aim is eventually to make inferences beyond the elements in your sample.
In some research contexts, a preexisting sampling frame may be easy to come by. For example, one could easily sample elected officials in Italian regions (Putnam 1993), or issues on which registered lobbyists have been active in the United States (Baumgartner et al. 2009). In other research contexts, however, official lists may be biased in ways that preclude representative sampling. For example, identifying the population of small-business owners in Lima, Peru, or Calcutta, India, based on the official tax rolls would exclude large numbers of informal entrepreneurs. Conducting interviews with both formal and informal entrepreneurs to identify all the business owners active in a particular area of the city or sector of the local economy could be necessary in order to establish a complete sampling frame and allow for truly random sampling of the population of interest. In still other research contexts—for example, for a study of squatter settlements, undocumented migrants, or victims of ethnic cleansing—there may be no written lists available at all, and preliminary research might be needed to establish the boundaries of the population of interest.
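The logic of the small-business example above can be made concrete with a short sketch. This is a hypothetical illustration, not part of the chapter: the frame combines an official list (formal firms) with entrepreneurs identified only through preliminary fieldwork (informal firms), and a simple random sample is then drawn from the combined frame so that informal operators have the same chance of selection as formal ones.

```python
import random

# Hypothetical sampling frame assembled from preliminary interviews:
# tax-roll businesses plus informal entrepreneurs identified in the field.
formal = [f"formal_{i}" for i in range(40)]      # on official lists
informal = [f"informal_{i}" for i in range(25)]  # found only via fieldwork
sampling_frame = formal + informal               # the full population of interest

random.seed(42)  # fixed seed so the draw is reproducible
# Simple random sample without replacement from the complete frame.
sample = random.sample(sampling_frame, k=12)

informal_share = sum(1 for s in sample if s.startswith("informal")) / len(sample)
print(len(sample), informal_share)
```

Sampling only from the `formal` list would set the selection probability of every informal entrepreneur to zero, which is exactly the bias the preliminary interviews are meant to remove.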
While it is likely to be time-consuming, doing preliminary interviews in order to establish the universe of relevant cases for a research project can have positive side effects. It is for good reason that collaborative mapping and census-taking are two standard “entry” strategies for ethnographic researchers (MacLean 2010). Talking to the people who live or work in the area in which we plan to do our research not only allows us to generate a comprehensive list of potential respondents, but also to get started establishing the rapport that will facilitate data-collection efforts as we move into the main part of our research (see MacLean, this volume, chapter 3).
Using Interviews in the Main Study
Interviews are frequently used to generate data to test central descriptive and causal hypotheses in political science research. Framing interview work in this way may make it sound little different from survey research.2 But by “generating data” I do not only mean using tightly structured questionnaires to elicit responses that can be numerically coded and later subjected to statistical analysis. Interviews can generate both overt and latent content, which can be analyzed in a variety of ways.
The overt content of an interview comprises the answers that interviewees articulate to the questions we ask them. For example, a researcher might ask a user of social services or a civic activist, “Whom did you approach about this problem?” “How many contacts did you have?” “What was the response like?” (Note that even when the information itself is qualitative, data like type of contacts or characteristics of the response in the example above can be coded into nominal response categories.) A number of contributors to this volume (Beckmann and Hall, Cammett, Leech et al., Martin) have used semi-structured interviews to generate responses that they then coded as data and analyzed statistically.
Direct answers to direct questions may also be analyzed qualitatively, of course. For example, interviews that elicit information about how events unfolded, or who was involved in decision-making and what their goals were, are often primary sources for researchers who use process tracing, pattern matching, and other case-based methods. For example, I used qualitative data from my interviews with policymakers and current and former officials of labor unions and employer organizations in my study of why Italian and Dutch social policies developed with such different age orientations in the post-World War II period (Lynch 2006). This type of overt content—which generates data that can be characterized as “causal process observations” (Brady and Collier 2004, 227-228)—is particularly useful for research into causal mechanisms and has been used fruitfully in historical institutionalist work in comparative politics, international relations, and American politics subfields.3
The overt content of interviews can also be analyzed for recurrent themes, issues, and relationships that respondents raise in the course of answering our questions (see Rogers, this volume, chapter 12). Various forms of qualitative content analysis, done by hand or with the aid of software packages like NVivo or ATLAS.ti, allow us to sift through the data in our interview notes and transcripts to think systematically about the world as our respondents have recounted it to us. (For a useful guide to qualitative content analysis based in grounded theory, see Emerson, Fretz, and Shaw 1995, chap. 6.)
Latent content is information we glean from an interview that is not directly articulated by the interviewee in response to our questions. As such, it constitutes a kind of metadata that exists on a plane above the overt content of the respondent’s verbal answers to our questions. Examples of latent content include the length of time respondents take before answering a question, the number of causal connections they make in order to justify a particular response, the way they link ideas together, the things they don’t tell us, and even our own observations about the apparent truthfulness of respondents when answering particular questions. Latent content can provide particularly valuable information when we use systematic criteria for recording and analyzing it. For example, Hochschild (1981) examines the interconnections between ideas in her interview data to create informal cognitive maps that reveal the underpinnings of Americans’ beliefs about justice. Fujii’s attentiveness to the metaphors her respondents use and the lies they tell allows her to elucidate the social and political context surrounding the Rwandan genocide (Fujii 2010).
Using Interviews in Multi-method Research
Interview data have particular strengths that other forms of data may lack. Well-conducted interviews give access to information about respondents’ experiences and motivations that may not be available in the public or documentary record; they allow us to understand opinions and thought processes with a granularity that surveys rarely achieve; and they can add microfoundations to events or patterns observed at the macro level. At the same time, the interpersonal nature of the interview experience can raise concerns about the objectivity or reliability of data that come out of that process; and in-depth interviews require a commitment of research resources—particularly time—that often makes it infeasible to conduct enough interviews to permit generalization to a larger population. In order to take advantage of the strengths of interview data and mitigate the weaknesses, many researchers use interviews in conjunction with other forms of data to make arguments and test hypotheses.
In some multi-method research, interviews are used in order to triangulate with other methods—in other words, to bring different forms of data to bear to answer the same question. For example, in my book on the origins of divergent age-orientation of welfare states, I used interviews in conjunction with archival research to fill in blanks in the archival record and uncover the motivations of particular policy actors (Lynch 2006). Others have used interviews to identify and explore the mechanisms underlying findings based on analysis of aggregate-level data, as in Mosley’s study of the influence of political and economic factors...

Table of contents

  1. Preface
  2. Contributors
  3. Introduction. “Just Talk to People”? Interviews in Contemporary Political Science
  4. Part 1 GENERAL CONSIDERATIONS: RESEARCH DESIGN, ETHICS, AND THE ROLE OF THE RESEARCHER
  5. Part 2 ADDRESSING THE CHALLENGES OF INTERVIEW RESEARCH
  6. Part 3 PUTTING IT ALL TOGETHER: THE VARIED USES OF INTERVIEW DATA
  7. Appendix: Sample Materials for Interview Research
  8. Notes
  9. References