Part 1
GENERAL CONSIDERATIONS
Research Design, Ethics, and the Role of the Researcher
1
ALIGNING SAMPLING STRATEGIES WITH ANALYTIC GOALS
Julia F. Lynch
In political science, information gleaned from interviews can serve a number of purposes, depending on the stage in the research process, the goals of the research, and external constraints on the number or type of interviews we can do. Interviews can be undertaken as a preliminary to the main study, as the main source of data for a study, or as one component in a multi-method research project. Interviews may be used to generate data or metadata, to test descriptive or causal hypotheses, to enhance the validity or reliability of our measures, or as a source of illustrative material that enlivens our analyses and makes our writing more enjoyable and accessible. Each of these uses of interview research suggests a different set of requirements for selecting people to interview (and sometimes for how to interview them). In turn, the choices we make about sampling have implications for the role that interview data can play in our analyses and in the larger enterprise of theory building in political science. The aim of this chapter is to develop a set of guidelines that will help researchers align sampling strategies with analytic goals in interview-based research.
How should interview researchers sample their respondents, particularly if they hope to use interviews as a part of a larger multi-method research agenda? One argument runs that because random sampling is required to generate unbiased descriptive and causal inferences about larger populations of people, organizations, or events, interviews can yield "real" data only when respondents are randomly sampled. Some authors argue that the epistemological underpinnings of arguments about the value of in-depth data derived from interviews are at the very least incommensurate with the requirements of large-n research (Ahmed and Sil 2009; Beck 2009). Hence data derived from interviews with non-randomly selected respondents do nothing to enhance the validity of claims based on statistical analysis of aggregate-level data, and multi-method "triangulation" using such interview data isn't worth much more than the paper the interview transcripts are printed on.
To be sure, studies that make claims about population characteristics based on convenience samples should be approached with skepticism. And when interview data are used as window dressing, there is often a temptation to select only quotations that support the overall argument of the analysis, or to anoint non-randomly selected respondents as "typical." These practices may enliven an otherwise dry research narrative, but they cannot be considered multi-method research because they do not enhance the validity or reliability of claims generated using other methods.
However, even interviews with non-random samples of individuals (or of individuals associated with non-random samples of organizations and events) can add to our store of knowledge, and to multi-method research. For example, interviews conducted as a precursor to survey work can aid in the creation of more-reliable measures used in large-n studies. Case study interviews may add meat to large-n causal arguments by using causal process observations to generate Bayesian updates about what is happening and why at a given point in a causal chain or process (J. Mahoney 2009). Purposive or quota samples may be good enough in many cases to verify relationships first observed and validated using other methods. Insights drawn from in-depth research with non-randomly selected respondents may also generate relational, meta-level information about the society or organization in which they are embedded: information that is simply unobtainable any other way. For all these reasons, even non-random-sampling designs for interview research can enhance multi-method research. And interviews of randomly selected individuals can, when conducted and analyzed with rigor, contribute data that are ideal for integration with other forms of data in multi-method research.
Most political scientists who use, or plan to use, interview data in their work are familiar with at least one or two works whose findings hinge on data drawn from in-depth in-person interviews. In American politics, for example, Robert Lane's Political Ideology (1962), Jennifer Hochschild's What's Fair (1981), and Richard Fenno's Home Style (1978) are three classic works that place interview data at center stage. Lane's book, subtitled "Why the Common Man Believes What He Does," draws on a small number of in-depth interviews with non-elites to explore the roots of political views in the mass public. Hochschild conducted in-depth, semi-structured interviews with a larger number of non-elites (twenty-eight residents of New Haven, Connecticut) to understand how they thought about justice and fairness in a variety of domains of life (the economy, politics, and the social domain encompassing family, friends, and schooling). Fenno's interviews with eighteen members of Congress as they went about their daily routines in their home districts allowed him to understand how elected officials' views of their constituencies affect their political behavior. Interviews need not, of course, be the only or even main source of data for a research project. Interviews can be equally useful playing a supporting or costarring role. Deciding how to use interview data and figuring out whom to interview are both important decisions that need to be made with an eye to the role the interview data will play in the larger research agenda.1
For the purposes of this chapter, I argue from a positivist worldview: in other words, I assume that researchers will be using interview data in the service of a research agenda that ultimately aims to frame and test hypotheses about the political world. My focus on sampling and the related problems of inference derives from this epistemological position. It is worth noting, however, that many political scientists who use interview research take a different approach. Scholars working in a constructivist or interpretivist vein are more likely to view the information that comes out of an interview as discursively constructed and hence unique to the particular interaction among interviewer, interviewee, and interview context. When viewed from this perspective, the central methodological issue of interview research is not so much sampling in order to facilitate generalization, but rather interpreting the data from a given interview in light of the interactions that produced it. (Of course, positivists who look to interviews to provide "evidence" should pay at least as much attention as interpretivists do to the quality and characteristics of data produced in the interview setting. Many of the chapters in this volume treat this topic in more detail.)
The next section of this chapter explores some of the different ways that interview research can be used to contribute to a positivist political science research agenda. The subsequent section discusses alternative sampling techniques, with an eye to understanding the analytic leverage that these different techniques offer and how this leverage can be used in the pursuit of specific analytic goals. The conclusion brings us back to ground level with a discussion of practical constraints that may hinder researchers' attempts to create optimal linkages between sampling strategies and research goals. A central message of the chapter is that the sampling methods researchers employ in their interview research are critical in determining whether and how interview data can be used to enhance the validity of interview-based and multi-method research.
Interviews and the Research Process
Interviews can be used productively in the service of a variety of different research goals, and at a variety of stages in the research process. The following examples are organized chronologically around the stage of research, and within that according to the analytic goals of the research.
Using Interviews in Preliminary Research
Preliminary research is research that occurs before collection of the data on which the main descriptive or causal hypotheses of a study will be tested. Interviews can be a valuable source of information in preliminary research, whether or not the main research project will use interview data.
In case study-based research, interviews at the pre-dissertation or scoping-out-a-new-project stage can use process-tracing questions to identify fruitful (and fruitless) avenues of research. Talking to people is often quicker than archival research for figuring out what happened when, who was involved, what the important decisions were, or where documentary materials related to your research question may be found. This type of preliminary interviewing is one method for quickly generating and testing, in a "rough-and-ready" way, a number of alternative hypotheses about a particular case study or case studies (Gerring 2007, chap. 3). Using preliminary interviews to get the lay of the land aids the purposive selection of cases for small-n studies, since some hypotheses have already been identified as irrelevant or, alternatively, in need of further testing.
Interviews also can be used (and often should be used) in advance of conducting a survey or behavioral experiment. In-depth interviews help the researcher get a sense of the opinions, outlooks, or cognitive maps of people who are similar to the research subjects who will eventually take part in the study. Interviews can help determine what questions are relevant and the appropriate range of response options (see e.g. Gallagher, this volume, chapter 9). Even if the researcher is fairly certain of the content of the questions she would like to ask or the games she would like her subjects to play, pretesting in a setting that allows for instant feedback from the respondent can help fine-tune question wording, question ordering, or visual prompts.
We have seen so far that preliminary interviews are often particularly useful because they allow us to refine our concepts and measures before embarking on a major research project. But interviews also can be an essential precursor to larger research projects when they are used to establish the sampling frame for a random sample or to figure out which characteristics to select for in a purposive sample. We will talk more about these types of sampling in the next section. What is important for the moment is that preliminary research is very often necessary before we can draw a sample, particularly if the aim is eventually to make inferences beyond the elements in the sample.
In some research contexts, a preexisting sampling frame may be easy to come by. For example, one could easily sample elected officials in Italian regions (Putnam 1993), or issues on which registered lobbyists have been active in the United States (Baumgartner et al. 2009). In other research contexts, however, official lists may be biased in ways that preclude representative sampling. For example, identifying the population of small-business owners in Lima, Peru, or Calcutta, India, based on the official tax rolls would exclude large numbers of informal entrepreneurs. Conducting interviews with both formal and informal entrepreneurs to identify all the business owners active in a particular area of the city or sector of the local economy could be necessary in order to establish a complete sampling frame and allow for truly random sampling of the population of interest. In still other research contexts (for example, a study of squatter settlements, undocumented migrants, or victims of ethnic cleansing), there may be no written lists available at all, and preliminary research might be needed to establish the boundaries of the population of interest.
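To make the frame-building logic concrete, here is a minimal sketch (illustrative only, with invented business names standing in for the Lima example above) of how entries from an official list and names surfaced in preliminary interviews might be merged into a single sampling frame before drawing a simple random sample:

```python
import random

# Hypothetical sources: official tax rolls plus informal entrepreneurs
# identified through preliminary interviews in the same district.
tax_roll = {"Alvarez Textiles", "Quispe Repairs", "Torres Imports"}
named_in_interviews = {"Quispe Repairs", "Rojas Street Stall", "Huaman Tailoring"}

# The sampling frame is the union of the two sources, de-duplicated and sorted
# so the frame itself can be documented and shared.
sampling_frame = sorted(tax_roll | named_in_interviews)

random.seed(42)  # fixed seed so the draw can be reproduced and reported
sample = random.sample(sampling_frame, k=3)
print(sample)
```

The point is not the code itself but the discipline it represents: the frame is assembled and recorded before any respondent is selected, so the eventual sample can credibly be described as random with respect to a known population.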
While it is likely to be time-consuming, doing preliminary interviews in order to establish the universe of relevant cases for a research project can have positive side effects. It is for good reason that collaborative mapping and census-taking are two standard "entry" strategies for ethnographic researchers (MacLean 2010). Talking to the people who live or work in the area in which we plan to do our research allows us not only to generate a comprehensive list of potential respondents, but also to begin establishing the rapport that will facilitate data-collection efforts as we move into the main part of our research (see MacLean, this volume, chapter 3).
Using Interviews in the Main Study
Interviews are frequently used to generate data to test central descriptive and causal hypotheses in political science research. Framing interview work in this way may make it sound little different from survey research.2 But by "generating data" I do not mean only using tightly structured questionnaires to elicit responses that can be numerically coded and later subjected to statistical analysis. Interviews can generate both overt and latent content, which can be analyzed in a variety of ways.
The overt content of an interview comprises the answers that interviewees articulate to the questions we ask them. For example, a researcher might ask a user of social services or a civic activist, "Whom did you approach about this problem?" "How many contacts did you have?" "What was the response like?" (Note that even when the information itself is qualitative, data like type of contacts or characteristics of the response in the example above can be coded into nominal response categories.) A number of contributors to this volume (Beckmann and Hall, Cammett, Leech et al., Martin) have used semi-structured interviews to generate responses that they then coded as data and analyzed statistically.
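As a purely illustrative sketch of that coding step (the categories, keywords, and answers below are invented, not drawn from any of the studies cited), free-text answers to the "Whom did you approach?" question might be assigned to nominal categories like this:

```python
# Hypothetical nominal categories and the keywords that trigger them.
CONTACT_CATEGORIES = {
    "local official": ["mayor", "municipal office", "city council"],
    "party contact": ["party office", "local party", "party secretary"],
    "ngo/association": ["ngo", "association", "union"],
}

def code_contact(answer: str) -> str:
    """Assign a nominal category based on keywords in the transcribed answer."""
    text = answer.lower()
    for category, keywords in CONTACT_CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other/uncoded"  # flagged for a second, human coding pass

responses = [
    "I went straight to the mayor's office.",
    "A friend put me in touch with the local party office.",
]
print([code_contact(r) for r in responses])  # ['local official', 'party contact']
```

In practice most researchers code such answers by hand or with dedicated software, but the underlying move is the same: collapsing open-ended responses into a closed set of categories that can be tabulated and analyzed statistically.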
Direct answers to direct questions may also be analyzed qualitatively, of course. Interviews that elicit information about how events unfolded, or about who was involved in decision-making and what their goals were, are often primary sources for researchers who use process tracing, pattern matching, and other case-based methods. For example, I used qualitative data from my interviews with policymakers and current and former officials of labor unions and employer organizations in my study of why Italian and Dutch social policies developed with such different age orientations in the post-World War II period (Lynch 2006). This type of overt content, which generates data that can be characterized as "causal process observations" (Brady and Collier 2004, 227-228), is particularly useful for research into causal mechanisms and has been used fruitfully in historical institutionalist work in the comparative politics, international relations, and American politics subfields.3
The overt content of interviews can also be analyzed for recurrent themes, issues, and relationships that respondents raise in the course of answering our questions (see Rogers, this volume, chapter 12). Various forms of qualitative content analysis, done by hand or with the aid of software packages like NVivo or ATLAS.ti, allow us to sift through the data in our interview notes and transcripts to think systematically about the world as our respondents have recounted it to us. (For a useful guide to qualitative content analysis based in grounded theory, see Emerson, Fretz, and Shaw 1995, chap. 6.)
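Once themes have been hand-coded, even a very simple tally can help reveal which ideas recur across a corpus of transcripts. The sketch below is illustrative only; the respondent identifiers and theme labels are invented, and dedicated content-analysis software handles this bookkeeping far more flexibly:

```python
from collections import Counter

# Hypothetical theme codes assigned by hand to two interview transcripts.
coded_transcripts = {
    "respondent_01": ["distrust of officials", "fairness", "family obligation"],
    "respondent_02": ["fairness", "distrust of officials", "economic insecurity"],
}

# Tally how often each theme appears across the interview corpus.
theme_counts = Counter(code for codes in coded_transcripts.values() for code in codes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Counting is only a starting point, of course; the analytic payoff comes from returning to the transcripts to examine how respondents connect these themes to one another.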
Latent content is information we glean from an interview that is not directly articulated by the interviewee in response to our questions. As such, it constitutes a kind of metadata that exists on a plane above the overt content of the respondent's verbal answers to our questions. Examples of latent content include the length of time respondents take before answering a question, the number of causal connections they make in order to justify a particular response, the way they link ideas together, the things they don't tell us, and even our own observations about the apparent truthfulness of respondents when answering particular questions. Latent content can provide particularly valuable information when we use systematic criteria for recording and analyzing it. For example, Hochschild (1981) examines the interconnections between ideas in her interview data to create informal cognitive maps that reveal the underpinnings of Americans' beliefs about justice. Fujii's attentiveness to the metaphors her respondents use and the lies they tell allows her to elucidate the social and political context surrounding the Rwandan genocide (Fujii 2010).
Using Interviews in Multi-method Research
Interview data have particular strengths that other forms of data may lack. Well-conducted interviews give access to information about respondents' experiences and motivations that may not be available in the public or documentary record; they allow us to understand opinions and thought processes with a granularity that surveys rarely achieve; and they can add microfoundations to events or patterns observed at the macro level. At the same time, the interpersonal nature of the interview experience can raise concerns about the objectivity or reliability of data that come out of that process; and in-depth interviews require a commitment of research resources, particularly time, that often makes it infeasible to conduct enough interviews to permit generalization to a larger population. In order to take advantage of the strengths of interview data and mitigate the weaknesses, many researchers use interviews in conjunction with other forms of data to make arguments and test hypotheses.
In some multi-method research, interviews are used in order to triangulate with other methods; in other words, to bring different forms of data to bear to answer the same question. For example, in my book on the origins of the divergent age orientation of welfare states, I used interviews in conjunction with archival research to fill in blanks in the archival record and uncover the motivations of particular policy actors (Lynch 2006). Others have used interviews to identify and explore the mechanisms underlying findings based on analysis of aggregate-level data, as in Mosley's study of the influence of political and economic factors...