Research Interviewing

Context and Narrative

eBook - ePub
About this book

Interviews hold a prominent place among the various research methods in the social and behavioral sciences. This book presents a powerful critique of current views and techniques, and proposes a new approach to interviewing. At the heart of Elliot Mishler's argument is the notion that an interview is a type of discourse, a speech event: it is a joint product, shaped and organized by asking and answering questions.

This view may seem self-evident, yet it does not guide most interview research. In the mainstream tradition, the discourse is suppressed. Questions and answers are regarded as analogues to stimuli and responses rather than as forms of speech; questions and the interviewer's behavior are standardized so that all respondents will receive the same "stimulus"; respondents' social and personal contexts of meaning are ignored. While many researchers now recognize that context must be taken into account, the question of how to do so effectively has not been resolved. This important book illustrates how to implement practical alternatives to standard interviewing methods.

Drawing on current work in sociolinguistics as well as on his own extensive experience conducting interviews, Mishler shows how interviews can be analyzed and interpreted as narrative accounts. He places interviewing in a sociocultural context and examines the effects on respondents of different types of interviewing practice. The respondents themselves, he believes, should be granted a more extensive role as participants and collaborators in the research process.

The book is an elegant work of synthesis—clearly and persuasively written, and supported by concrete examples of both standard interviewing and alternative methods. It will be of interest to both scholars and clinicians in all the various fields for which the interview is an essential tool.

1

Standard Practice

The way that our everyday, ordinary practice of asking and answering questions has been formalized into a research method is illustrated in standard definitions of interviewing found in textbooks and manuals. In this chapter, as background to developing an alternative approach, I examine the assumptions and implications of these definitions and focus on how the standard view of interviewing constrains research to “merely” technical issues and obscures the central problem of discourse.1
In a widely cited review, Maccoby and Maccoby (1954, p. 449) offer the following definition: “For our purposes, an interview will refer to a face-to-face verbal interchange, in which one person, the interviewer, attempts to elicit information or expressions of opinion or belief from another person or persons.” A similar definition is found in Kahn and Cannell’s (1957, p. 16) influential text: “We use the term interview to refer to a specialized pattern of verbal interaction—initiated for a specific purpose, and focused on some specific content area, with consequent elimination of extraneous material. Moreover, the interview is a pattern of interaction in which the role relationship of interviewer and respondent is highly specialized, its specific characteristics depending somewhat on the purpose and character of the interview.”
Any assertion about uniformity of approach must be advanced with caution. Nonetheless these definitions appear to be widely accepted among investigators, as is evident from examination of studies based on interviews as well as of research on problems of interviewing, even when definitions either are more casual than those cited above or are left implicit. Schuman and Presser (1981, p. 1), for example, in reporting their studies of effects on responses of question wording and question order, do not provide a specific definition but refer in passing to the survey interview as combining sampling methods with “the ancient but extremely efficient method of obtaining information from people by asking questions.” Sometimes the definition is even more oblique or indirect, as in Kidder’s (1981) revision of a standard text on methods. Kidder makes little distinction between questionnaire and interview and notes that in both “heavy reliance is placed on verbal reports from the subjects for information about the stimuli or experiences to which they are exposed and for knowledge of their behavior” (p. 146). And sometimes a definition is omitted even where it might be expected, as in the Interviewer’s Manual of the Survey Research Center (1976) at the University of Michigan, which includes extensive discussion of problems and much advice on how to conduct interviews but presents no explicit definition of them.
These instances of indirectness and implicitness presume that we all “know” what an interview is, at least if we are members of the research community, and that although there may still be technical problems interviewing is essentially nonproblematic as a method. Within this context of a taken-for-granted understanding, analyses and discussions of the interviewing method reveal the same assumptions that may more clearly be discerned in the explicit definitions cited earlier.
The first assumption is that an interview is a behavioral rather than a linguistic event. The definitions refer to an interview not as speech, or talk, or even communication, but as a “verbal interchange,” a “pattern of verbal interaction,” or a “verbal report.” In this way the definitions erase and remove from consideration the primary and distinctive characteristic of an interview as discourse, that is, as meaningful speech between interviewer and interviewee as speakers of a shared language. The difference between a conception of interviewing as a form of talk and a conception of it as a “verbal interchange” or “verbal interaction” is far from trivial. It marks radically different understandings of the nature of the interview, of its special qualities, and of its problems.
Talk and behavior, as key alternative terms for conceptualizing interviews as well as other types of human action and experience, contrast with each other in highly significant ways.2 Situations and forms of talk have structures—that is, forms of systematic organization—that reflect the operation of several types of normative rules—for example, rules of syntax, semantics, and pragmatics, to use a familiar scheme. As is true of other culturally grounded norms, these rules guide how individuals enter into situations, define and frame their sense of what is appropriate or inappropriate to say, and provide the basis for their understandings of what is said. This view of talk applies specifically in interviews, as we shall see later, to both interviewers’ and respondents’ understandings of the meaning and intent of questions and responses. Units of behavior, on the other hand, are arbitrary and fragmented and become connected and related to one another not through higher-order rules but through a history of past associations and reinforcements that varies from person to person. This view allows, and indeed encourages, interviewers and analysts to treat each question-answer pair as an isolated exchange.
The standard conception of interviewing as behavior, albeit verbal behavior, excludes explicit recognition of the cultural patterning of situationally relevant talk. The behavioral definition removes from consideration, in the analysis and interpretation of interviews, the normatively grounded and culturally shared understandings of interviews as particular types of speech situations. In turn, the consequent decontextualizing of questions and responses leads to a variety of problems in the analysis and interpretation of interview data. These problems are viewed as “technical,” that is, as problems that can be “solved” through more precise and rigorous methods. They may usefully be thought of as research-iatrogenic, that is, generated by the behavioral approach itself rather than inherent in the interview. They result from the assumptions of the behavioral approach to interviewing, not from problems faced by all individuals in talking with and understanding one another. The problems include, for example, variation across interviewers, unreliability of coding, and the ambiguities and possible spuriousness of relationships among variables. Typical efforts to deal with them include, respectively, systematic interviewer training programs, elaborate coding manuals, and complex multivariate statistical analyses.
I am not mounting an argument against rigor and precision in research. Sophisticated, technical methods are integral to any scientific study. I am proposing, however, that the widespread view of interviews as behavioral events leads to the definition of certain problems as technical when those problems go much deeper. Technical solutions are applied unreflectively, they become routine practice, and the presuppositions that underlie the approach remain unexamined. The sense of precision provided by these methods is illusory because they tend to obscure rather than illuminate the central problem in the interpretation of interviews, namely, the relationship between discourse and meaning.
One consequence of the behavioral approach is the almost total neglect by interview researchers of work by students of language on the rules, forms, and functions of questions and responses. There exists a respectable and instructive body of theoretical and empirical work on these topics by philosophers of language, linguists, sociolinguists, anthropologists, and sociologists. Dillon (1981), for example, recently compiled a preliminary bibliography of over two hundred articles on questioning as a form of speech, putting particular emphasis on studies in education and on the interactive functions of questions. His list includes only a handful of reports from the extensive literature in survey and opinion research, and in turn this literature, which focuses on different problems, rarely refers to work on questioning in linguistics and sociolinguistics.
Interest in this topic has grown over the past decade, and a number of social scientists have explored linguistic and conversational rules that apply to questioning and answering in naturally occurring conversation. Goffman (1976), for example, examines linguistic and social constraints in conversation and the differences between replies and responses. Labov and Fanshel (1977) elaborate a formal set of rules for legitimate requests and their variants, with questions as one type of request. Mishler (1975a,b, 1978) shows systematic regularities in successive chains of questions and answers. Schegloff and Sacks (1973) and Sacks, Schegloff, and Jefferson (1974) develop the concept of adjacency pairs for the situation where a second speaker’s utterances are tied to and contingent in particular ways on a first speaker’s utterances, a conversational structure of which questions and answers are one important subtype. Briggs (1983, 1984) and Frake (1964, 1977) discuss the uses and problems of formal questioning procedures in ethnographic field research in other cultures.
This brief and noninclusive list is intended only to document the generalization made above that there is a serious and substantial tradition of theory and research on questions and answers, the central and distinctive feature of interviews, that is not represented in the dominant approach to interview research. Except for the few reports on survey research noted by each of them, there is an almost total lack of overlap between Dillon’s (1981) bibliography and the extensive bibliographies included in recent books summarizing studies of questions and answers in survey interviews by Dijkstra and van der Zouwen (1982) and by Schuman and Presser (1981). This nearly total neglect of linguistically oriented theoretical and empirical work on questions and answers by investigators in the survey research tradition directly reflects the latter’s definition of the interview as a behavioral event, as a verbal interchange, rather than as a speech event—that is, as discourse.
A second assumption of the standard approach in interview research, closely linked to its behavioral bias, is its reliance on the stimulus-response paradigm of the experimental laboratory for conceptualization of the interview process and, consequently, for specification of issues for research. Brenner (1982, p. 131) explicitly invokes this model as a research framework in his review of studies of the “role” of interviewers and the “rules” of interviewing: “It is useful, if only heuristically, to think of the question-answer process in the survey interview in stimulus-response terms . . . The stimulus-response analogy is useful because the only objective of survey interviewing consists in obtaining respondents’ verbal reactions to the questions put to them, these meeting particular response requirements posed by the questions.” By specifying the objective as obtaining “verbal reactions,” Brenner makes explicit the connection between the stimulus-response model of interviewing and the behavioristic assumption. Brenner then draws implications from this analogy:
Attempts to implement the stimulus-response analogy, in as much as is possible, require, first the standardization of the questionnaire to be used in the interviews. In order to maximize the effect of the questions qua stimuli, it is also necessary to try to ensure that the interviewing techniques used do not affect the answering process other than in terms of facilitating the accomplishment of, in measurement terms, adequate responses—that is, answers which are contingent upon the questions alone . . . Also, in order to achieve reliability and precision in the ways in which interviews are conducted (both are prerequisites for assuming the equivalence of interviews in terms of interviewer-respondent interaction), the interviewing techniques must be determined, and standardized, before the data collection commences. (pp. 131–132)
By and large, research on problems of the interview has been framed within the stimulus-response paradigm, implicit reliance on its assumptions guiding the general direction of inquiry and generating the specific questions for study. The primary aim of this research, and of recommendations for practice based on it, is to ensure, in accord with Brenner’s prescription, the “equivalence of interviews in terms of interviewer-respondent interaction.” Because the “stimulus” is a compound one, consisting of interviewer plus question, it is not surprising to find the majority of studies directed to two general questions: How are respondents’ answers influenced by the form and wording of questions? and How are they influenced by interviewer characteristics?
The intent of these studies is to find ways to standardize the stimulus or, perhaps a better term, to neutralize it, so that responses may be interpreted clearly and unequivocally. That is, the aim is to ascertain respondents’ “true” opinions and to minimize possible distortions and biases in responses that may result from question or interviewer variables that interfere with respondents’ abilities or wishes to express their “real” or “true” views. Such potentially confounding variables include, for example, whether a question is phrased in negative or positive terms, the number and placement of alternative response categories, the sequential order of questions, and particular social attributes, expectations, or attitudes of interviewers.
Dijkstra and van der Zouwen (1982, p. 3), who refer to this as the general problem of “response effects,” note that the central concern of interview research is with “distortions because of the effects of improper variables, that is, variables other than the respondent’s opinion, etc. that the researcher is interested in.” In a similar vein Hagenaars and Heinen (1982, p. 92), reviewing studies of the effects on responses of selected interviewer social characteristics, state that “the main feature of the registered response that will be of interest is response bias: the difference between the registered score and the true score.”
This is not the place to detail the findings of a large number of studies; several recent reviews serve this purpose, for example, Cannell, Miller, and Oksenberg (1981), the papers in Dijkstra and van der Zouwen (1982), and the monograph by Schuman and Presser (1981). However, it is germane to my argument to assess in broad terms the net result of this line of investigative effort. The following generalization is warranted, I believe, as a statement of the level of understanding that has been achieved regarding the effects of interviewer and question variables: some variables, and perhaps all of them, have some effects on some, and perhaps all, types of response under some conditions. Or, restated in somewhat different terms: each stimulus variable studied may influence some feature(s) of a response, the magnitude and seriousness of the effect being a function of various contextual factors.
This is a disturbing conclusion, all the more so because such a statement could have been made prior to undertaking the studies. Further, the conclusion and the findings that it reflects have no practical implications for the design of any particular study because the possible relationships between stimulus and response variables have to be determined separately in each instance.
I am aware that this is a harsh and sweeping generalization. It may be mitigated to some extent by the observation that many investigators arrive at a similar conclusion, although they often place it in the more positive context of the evident need for future research. This mixture of criticism and hopefulness is expressed clearly by Presser (1983) in his recent essay review of three books on survey research methodology and practice, including the Dijkstra and van der Zouwen (1982) collection cited here. Presser, coauthor of another major study (Schuman and Presser, 1981), retains a more optimistic view than I do about the potential value of survey research, but his comments are in full agreement with the argument I have advanced here.
It is striking, though, how little influenced most survey practice is by this accumulated knowledge. The typical survey is conducted in ignorance or disregard of methodological findings . . . To begin with, methodological research sometimes produces conflicting findings or findings difficult to interpret. This is true, for instance, of studies of the differences between agree-disagree and forced-choice question formats ... In many other areas, data-collection issues have not been subjected to much systematic inquiry . . . Finally, methodological research sometimes produces results that have no clear implication for practice . . . meaning ... is affected by the order of the questions ... as with many other demonstrations of context effects, it points to the importance of contexts, but not to any practical guide for ordering survey items. (pp. 637–638)
Beza (1984), in an essay review of three different books reporting findings of within-survey experiments on such problems as question order and question form, including the Schuman and Presser (1981) study discussed below, arrives at a conclusion that echoes my own and Presser’s about the limited value of such studies for research practice: “Perhaps the most important conclusion to be drawn from the three books is that the answers to questions often depend on question form and respondent understanding. Consequently, investigators interested in assessing the impact of question form and respondent understanding need to conduct their own experiments within surveys” (p. 37).
Given the extent and seriousness of these problems—the ambiguity and often contradictory nature of findings from methodological studies and the lack of any general guidelines that would apply across different studies—we can more easily understand why research reports and review essays are pervaded by “on the one hand, on the other hand” locutions, why caution is expressed about drawing firm conclusions or overgeneralizing from the data, and why interpretations are wrapped in layers of qualifications. Thus, DeLamater (1982), summarizing findings on the effects of variations in the wording of questions directed to the same topic, remarks: “It may be incorrect to think that it is possible to have alternative wordings of the ‘same’ item. Any change in wording can change the meaning of the question. Whether two items are equivalent should be treated as a question to be answered analytically, using techniques such as interitem correlations, factor analyses, and analyses which focus on substantive relationships involving each item” (p. 23). Noting the absence of “systematic” effects, that is, general effects that hold across surveys and content areas, he points to the significance of contextual relationships: “The available research does not find systematic effects of either interviewer or respondent characteristics. When such person variables are related to responses, it is primarily in interaction with particular types of questions or characteristics of the data collection situation” (p. 38).
Molenaar (1982) concludes in a similar vein regarding variation in question wording: “Moreover, hardly any experiment gives a decisive answer as to which of the question-wordings involved is more valid. Thus, also the direct practical utility of any generalizing statement may be said to be fairly restricted, in that it does not constitute practical guidelines for framing questions” (p. 51). Reviewing the effects of differences in the form of response alternatives, Molenaar asserts: “The effects, however, will vary with the content of the questions and with the nature of the added contrasting alternative(s).” Similarly, with regard to the effects of directive as compared with nondirective questions, he states: “the effects of directive question-forms on the responses, . . . seem to be dependent for example, on characteristics of the respondents, the content and the context of the question concerned” (p. 70).
These citations could easily be multiplied, but it may be more useful to consider in some detail a particular example of a topic regarding which “data collection issues have not been subjected to much systematic inquiry” (Presser, 1983, p. 637). Brenner (1982) conducted one of the few studies that directly examine, through the analysis of tape-recorded interviews, whether interviewers...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Preface
  5. Contents
  6. Introduction: Problems of the Research Interview
  7. 1. Standard Practice
  8. 2. Research Interviews as Speech Events
  9. 3. The Joint Construction of Meaning
  10. 4. Language, Meaning, and Narrative Analysis
  11. 5. Meaning in Context and the Empowerment of Respondents
  12. Conclusion: Prospects for Critical Research
  13. Appendix: Suggested Readings in Narrative Analysis
  14. Notes
  15. References
  16. Index
