1
Standard Practice
The way that our everyday, ordinary practice of asking and answering questions has been formalized into a research method is illustrated in standard definitions of interviewing found in textbooks and manuals. In this chapter, as background to developing an alternative approach, I examine the assumptions and implications of these definitions and focus on how the standard view of interviewing constrains research to "merely" technical issues and obscures the central problem of discourse.1
In a widely cited review, Maccoby and Maccoby (1954, p. 449) offer the following definition: "For our purposes, an interview will refer to a face-to-face verbal interchange, in which one person, the interviewer, attempts to elicit information or expressions of opinion or belief from another person or persons." A similar definition is found in Kahn and Cannell's (1957, p. 16) influential text: "We use the term interview to refer to a specialized pattern of verbal interaction--initiated for a specific purpose, and focused on some specific content area, with consequent elimination of extraneous material. Moreover, the interview is a pattern of interaction in which the role relationship of interviewer and respondent is highly specialized, its specific characteristics depending somewhat on the purpose and character of the interview."
Any assertion about uniformity of approach must be advanced with caution. Nonetheless, these definitions appear to be widely accepted among investigators, as is evident from examination of studies based on interviews as well as of research on problems of interviewing, even when definitions either are more casual than those cited above or are left implicit. Schuman and Presser (1981, p. 1), for example, in reporting their studies of the effects of question wording and question order on responses, do not provide a specific definition but refer in passing to the survey interview as combining sampling methods with "the ancient but extremely efficient method of obtaining information from people by asking questions." Sometimes the definition is even more oblique or indirect, as in Kidder's (1981) revision of a standard text on methods. Kidder makes little distinction between questionnaire and interview and notes that in both "heavy reliance is placed on verbal reports from the subjects for information about the stimuli or experiences to which they are exposed and for knowledge of their behavior" (p. 146). And sometimes a definition is omitted even where it might be expected, as in the Interviewer's Manual of the Survey Research Center (1976) at the University of Michigan, which includes extensive discussion of problems and much advice on how to conduct "them" but presents no explicit definition of interviews.
These instances of indirectness and implicitness presume that we all "know" what an interview is, at least if we are members of the research community, and that although there may still be technical problems, interviewing is essentially nonproblematic as a method. Within this context of a taken-for-granted understanding, analyses and discussions of the interviewing method reveal the same assumptions that may more clearly be discerned in the explicit definitions cited earlier.
The first assumption is that an interview is a behavioral rather than a linguistic event. The definitions refer to an interview not as speech, or talk, or even communication, but as a "verbal interchange," a "pattern of verbal interaction," or a "verbal report." In this way the definitions erase and remove from consideration the primary and distinctive characteristic of an interview as discourse, that is, as meaningful speech between interviewer and interviewee as speakers of a shared language. The difference between a conception of interviewing as a form of talk and a conception of it as a "verbal interchange" or "verbal interaction" is far from trivial. It marks radically different understandings of the nature of the interview, of its special qualities, and of its problems.
Talk and behavior, as key alternative terms for conceptualizing interviews as well as other types of human action and experience, contrast with each other in highly significant ways.2 Situations and forms of talk have structures--that is, forms of systematic organization--that reflect the operation of several types of normative rules--for example, rules of syntax, semantics, and pragmatics, to use a familiar scheme. As is true of other culturally grounded norms, these rules guide how individuals enter into situations, define and frame their sense of what is appropriate or inappropriate to say, and provide the basis for their understandings of what is said. This view of talk applies specifically in interviews, as we shall see later, to both interviewers' and respondents' understandings of the meaning and intent of questions and responses. Units of behavior, on the other hand, are arbitrary and fragmented and become connected and related to one another not through higher-order rules but through a history of past associations and reinforcements that varies from person to person. This view allows, and indeed encourages, interviewers and analysts to treat each question-answer pair as an isolated exchange.
The standard conception of interviewing as behavior, albeit verbal behavior, excludes explicit recognition of the cultural patterning of situationally relevant talk. The behavioral definition removes from consideration, in the analysis and interpretation of interviews, the normatively grounded and culturally shared understandings of interviews as particular types of speech situations. In turn, the consequent decontextualizing of questions and responses leads to a variety of problems in the analysis and interpretation of interview data. These problems are viewed as "technical," that is, as problems that can be "solved" through more precise and rigorous methods. They may more usefully be thought of as iatrogenic to the research, generated by the behavioral approach itself rather than inherent in the interview. They result from the assumptions of the behavioral approach to interviewing, not from problems faced by all individuals in talking with and understanding one another. The problems include, for example, variation across interviewers, unreliability of coding, and the ambiguities and possible spuriousness of relationships among variables. Typical efforts to deal with them include, respectively, systematic interviewer training programs, elaborate coding manuals, and complex multivariate statistical analyses.
I am not mounting an argument against rigor and precision in research. Sophisticated technical methods are integral to any scientific study. I am proposing, however, that the widespread view of interviews as behavioral events leads to the definition of certain problems as technical when they in fact go much deeper. Technical solutions are applied unreflectively; they become routine practice, and the presuppositions that underlie the approach remain unexamined. The sense of precision provided by these methods is illusory because they tend to obscure rather than illuminate the central problem in the interpretation of interviews, namely, the relationship between discourse and meaning.
One consequence of the behavioral approach is the almost total neglect by interview researchers of work by students of language on the rules, forms, and functions of questions and responses. There exists a respectable and instructive body of theoretical and empirical work on these topics by philosophers of language, linguists, sociolinguists, anthropologists, and sociologists. Dillon (1981), for example, recently compiled a preliminary bibliography of over two hundred articles on questioning as a form of speech, putting particular emphasis on studies in education and on the interactive functions of questions. His list includes only a handful of reports from the extensive literature in survey and opinion research, and in turn this literature, which focuses on different problems, rarely refers to work on questioning in linguistics and sociolinguistics.
Interest in this topic has grown over the past decade, and a number of social scientists have explored linguistic and conversational rules that apply to questioning and answering in naturally occurring conversation. Goffman (1976), for example, examines linguistic and social constraints in conversation and the differences between replies and responses. Labov and Fanshel (1977) elaborate a formal set of rules for legitimate requests and their variants, with questions as one type of request. Mishler (1975a,b, 1978) shows systematic regularities in successive chains of questions and answers. Schegloff and Sacks (1973) and Sacks, Schegloff, and Jefferson (1974) develop the concept of adjacency pairs for the situation where a second speaker's utterances are tied to and contingent in particular ways on a first speaker's utterances, a conversational structure of which questions and answers are one important subtype. Briggs (1983, 1984) and Frake (1964, 1977) discuss the uses and problems of formal questioning procedures in ethnographic field research in other cultures.
This brief and noninclusive list is intended only to document the generalization made above: there is a serious and substantial tradition of theory and research on questions and answers, the central and distinctive feature of interviews, that is not represented in the dominant approach to interview research. Except for the few reports on survey research noted by each of them, there is an almost total lack of overlap between Dillon's (1981) bibliography and the extensive bibliographies included in recent books summarizing studies of questions and answers in survey interviews by Dijkstra and van der Zouwen (1982) and by Schuman and Presser (1981). This near-total neglect of linguistically oriented theoretical and empirical work on questions and answers by investigators in the survey research tradition directly reflects their definition of the interview as a behavioral event, as a verbal interchange, rather than as a speech event--that is, as discourse.
A second assumption of the standard approach in interview research, closely linked to its behavioral bias, is its reliance on the stimulus-response paradigm of the experimental laboratory for conceptualization of the interview process and, consequently, for specification of issues for research. Brenner (1982, p. 131) explicitly invokes this model as a research framework in his review of studies of the "role" of interviewers and the "rules" of interviewing: "It is useful, if only heuristically, to think of the question-answer process in the survey interview in stimulus-response terms . . . The stimulus-response analogy is useful because the only objective of survey interviewing consists in obtaining respondents' verbal reactions to the questions put to them, these meeting particular response requirements posed by the questions." By specifying the objective as obtaining "verbal reactions," Brenner makes explicit the connection between the stimulus-response model of interviewing and the behavioristic assumption. Brenner then draws implications from this analogy:
Attempts to implement the stimulus-response analogy, in as much as is possible, require, first the standardization of the questionnaire to be used in the interviews. In order to maximize the effect of the questions qua stimuli, it is also necessary to try to ensure that the interviewing techniques used do not affect the answering process other than in terms of facilitating the accomplishment of, in measurement terms, adequate responses--that is, answers which are contingent upon the questions alone . . . Also, in order to achieve reliability and precision in the ways in which interviews are conducted (both are prerequisites for assuming the equivalence of interviews in terms of interviewer-respondent interaction), the interviewing techniques must be determined, and standardized, before the data collection commences. (pp. 131-132)
By and large, research on problems of the interview has been framed within the stimulus-response paradigm, with implicit reliance on its assumptions guiding the general direction of inquiry and generating the specific questions for study. The primary aim of this research, and of recommendations for practice based on it, is to ensure, in accord with Brenner's prescription, the "equivalence of interviews in terms of interviewer-respondent interaction." Because the "stimulus" is a compound one, consisting of interviewer plus question, it is not surprising to find the majority of studies directed to two general questions: How are respondents' answers influenced by the form and wording of questions? And how are they influenced by interviewer characteristics?
The intent of these studies is to find ways to standardize the stimulus or, perhaps a better term, to neutralize it, so that responses may be interpreted clearly and unequivocally. That is, the aim is to ascertain respondents' "true" opinions and to minimize possible distortions and biases in responses that may result from question or interviewer variables that interfere with respondents' abilities or wishes to express their "real" or "true" views. Such potentially confounding variables include, for example, whether a question is phrased in negative or positive terms, the number and placement of alternative response categories, the sequential order of questions, and particular social attributes, expectations, or attitudes of interviewers.
Dijkstra and van der Zouwen (1982, p. 3), who refer to this as the general problem of "response effects," note that the central concern of interview research is with "distortions because of the effects of improper variables, that is, variables other than the respondent's opinion, etc. that the researcher is interested in." In a similar vein, Hagenaars and Heinen (1982, p. 92), reviewing studies of the effects on responses of selected interviewer social characteristics, state that "the main feature of the registered response that will be of interest is response bias: the difference between the registered score and the true score."
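Hagenaars and Heinen's definition of response bias, the difference between the registered score and the true score, can be rendered as a small simulation. The following Python sketch is purely illustrative: the true score, the size of the systematic interviewer effect, and the noise level are my own invented numbers, not values from any of the studies cited.

```python
import random

random.seed(0)

TRUE_SCORE = 3.0          # the respondent's "true" opinion on a 1-5 scale (hypothetical)
INTERVIEWER_EFFECT = 0.4  # hypothetical upward push from an "improper" variable

def registered_score(true_score, interviewer_effect):
    """One registered response: the true score, shifted by a systematic
    interviewer effect, plus random noise."""
    noise = random.gauss(0, 0.5)
    return true_score + interviewer_effect + noise

# Response bias = mean registered score minus the true score.
responses = [registered_score(TRUE_SCORE, INTERVIEWER_EFFECT) for _ in range(10_000)]
response_bias = sum(responses) / len(responses) - TRUE_SCORE

# Over many responses the random noise averages out, so the estimated
# bias approaches the systematic interviewer effect (about 0.4 here).
print(round(response_bias, 2))
```

The point of the toy model is only to separate the two components that the survey literature conflates under "response effects": random noise, which averages out, and a systematic effect of an "improper" variable, which does not.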
This is not the place to detail the findings of a large number of studies; several recent reviews serve this purpose, for example, Cannell, Miller, and Oksenberg (1981), the papers in Dijkstra and van der Zouwen (1982), and the monograph by Schuman and Presser (1981). However, it is germane to my argument to assess in broad terms the net result of this line of investigative effort. The following generalization is warranted, I believe, as a statement of the level of understanding that has been achieved regarding the effects of interviewer and question variables: some variables, and perhaps all of them, have some effects on some, and perhaps all, types of response under some conditions. Or, restated in somewhat different terms: each stimulus variable studied may influence some feature(s) of a response, the magnitude and seriousness of the effect being a function of various contextual factors.
This is a disturbing conclusion, all the more so because such a statement could have been made prior to undertaking the studies. Further, the conclusion and the findings that it reflects have no practical implications for the design of any particular study because the possible relationships between stimulus and response variables have to be determined separately in each instance.
I am aware that this is a harsh and sweeping generalization. It may be mitigated to some extent by the observation that many investigators arrive at a similar conclusion, although they often place it in the more positive context of the evident need for future research. This mixture of criticism and hopefulness is expressed clearly by Presser (1983) in his recent essay review of three books on survey research methodology and practice, including the Dijkstra and van der Zouwen (1982) collection cited here. Presser, coauthor of another major study (Schuman and Presser, 1981), retains a more optimistic view than I do about the potential value of survey research, but his comments are in full agreement with the argument I have advanced here.
It is striking, though, how little influenced most survey practice is by this accumulated knowledge. The typical survey is conducted in ignorance or disregard of methodological findings . . . To begin with, methodological research sometimes produces conflicting findings or findings difficult to interpret. This is true, for instance, of studies of the differences between agree-disagree and forced-choice question formats . . . In many other areas, data-collection issues have not been subjected to much systematic inquiry . . . Finally, methodological research sometimes produces results that have no clear implication for practice . . . meaning . . . is affected by the order of the questions . . . as with many other demonstrations of context effects, it points to the importance of contexts, but not to any practical guide for ordering survey items. (pp. 637-638)
Beza (1984), in an essay review of three different books reporting findings of within-survey experiments on such problems as question order and question form, including the Schuman and Presser (1981) study discussed below, arrives at a conclusion that echoes my own and Presser's about the limited value of such studies for research practice: "Perhaps the most important conclusion to be drawn from the three books is that the answers to questions often depend on question form and respondent understanding. Consequently, investigators interested in assessing the impact of question form and respondent understanding need to conduct their own experiments within surveys" (p. 37).
Given the extent and seriousness of these problems--the ambiguity and often contradictory nature of findings from methodological studies and the lack of any general guidelines that would apply across different studies--we can more easily understand why research reports and review essays are pervaded by "on the one hand, on the other hand" locutions, why caution is expressed about drawing firm conclusions or overgeneralizing from the data, and why interpretations are wrapped in layers of qualifications. Thus, DeLamater (1982), summarizing findings on the effects of variations in the wording of questions directed to the same topic, remarks: "It may be incorrect to think that it is possible to have alternative wordings of the 'same' item. Any change in wording can change the meaning of the question. Whether two items are equivalent should be treated as a question to be answered analytically, using techniques such as interitem correlations, factor analyses, and analyses which focus on substantive relationships involving each item" (p. 23). Noting the absence of "systematic" effects, that is, general effects that hold across surveys and content areas, he points to the significance of contextual relationships: "The available research does not find systematic effects of either interviewer or respondent characteristics. When such person variables are related to responses, it is primarily in interaction with particular types of questions or characteristics of the data collection situation" (p. 38).
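DeLamater's suggestion that the equivalence of two wordings be treated as an analytic question can be illustrated with a toy computation of the first technique he names, the interitem correlation. The Python sketch below uses entirely hypothetical ratings: the two "wordings" and all the numbers are invented for illustration and come from none of the studies cited.

```python
def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical 1-5 ratings from the same ten respondents answering two
# alternative wordings of the "same" item.
wording_a = [1, 2, 2, 3, 4, 4, 5, 5, 3, 2]
wording_b = [1, 1, 2, 3, 4, 5, 5, 4, 3, 2]

r = pearson_r(wording_a, wording_b)
# A high interitem correlation is consistent with equivalence; a low one
# suggests the change in wording changed the question's meaning.
print(round(r, 2))  # -> 0.92 for these invented data
```

The computation does not settle what a "high enough" correlation is; it only shows what treating equivalence analytically, rather than assuming it, looks like in the simplest case.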
Molenaar (1982) concludes in a similar vein regarding variation in question wording: "Moreover, hardly any experiment gives a decisive answer as to which of the question-wordings involved is more valid. Thus, also the direct practical utility of any generalizing statement may be said to be fairly restricted, in that it does not constitute practical guidelines for framing questions" (p. 51). Reviewing the effects of differences in the form of response alternatives, Molenaar asserts: "The effects, however, will vary with the content of the questions and with the nature of the added contrasting alternative(s)." Similarly, with regard to the effects of directive as compared with nondirective questions, he states: "the effects of directive question-forms on the responses . . . seem to be dependent, for example, on characteristics of the respondents, the content and the context of the question concerned" (p. 70).
These citations could easily be multiplied, but it may be more useful to consider in some detail a particular example of a topic regarding which "data collection issues have not been subjected to much systematic inquiry" (Presser, 1983, p. 637). Brenner (1982) conducted one of the few studies that directly examines, through the analysis of tape-recorded interviews, whether interviewers...