The Problem
A key concern for any kind of ethnographic research is the question of reliability. How can we know when the information we gather through participant-observation, interviews, self-reporting, and other ethnographic research methods is reliable enough to form the basis for a well-tempered portrait of a group of people? How do we take care not to create caricatures out of human beings, or skew our results by relying too heavily on either the words of a few insiders or on our own preconceived notions?
Over the years, a number of qualitative checks have been developed. This is not the place to review them, but I will mention a few for illustrative purposes. One way is to make sure that each important interpretation (e.g. about a marriage pattern, economic practice, or cosmological point of view) has made use of two or more specific methods for gathering information for analysis. That is, we can ask ourselves whether the information gathered from interviews aligns with that from observations, or whether researcher participation aligns with self-reporting. The assumption is that analysis based on two or more sets of data at the same time has a better chance of being reliable than analysis based solely on one method of information gathering. This is often referred to as triangulation (e.g. Fetterman 1989: 91; Holliday 2002: 43; DeWalt and DeWalt 2011: 128; Glesne 2011: 47; Bryman and Bell 2016: 306–307; Scott and Garner 2013: 185). In some circumstances, this can even involve multiple researchers recording the same event so that notes can be compared later (e.g. DeWalt and DeWalt 2011: 113).
An alternative to this is the saturation approach (e.g. Glaser and Strauss 1967), in which the researcher constantly compares inductively gathered information within the same category or subject in order to ensure that our explanations reach a level of internal reliability (e.g. Maykut and Morehouse 1994: 126–149; Bryman and Bell 2016: 270). Saturation is reached by building up the bits and pieces of information for a given topic (e.g. naming practices, joking behavior, or a particular religious ritual) until no new information about that practice or belief is forthcoming. Saturation is an ideal and is never fully achieved, but there will be a moment when diminishing returns inform the researcher that it is time to move on to a new topic.
When using either saturation or methodological comparison we have to be sure to include notes about the contradictions and disagreements we find among the various members of a group. Only in this fashion can we properly construct nuanced depictions of disparate ways of life, even as they occur within more or less common economic, social, and cultural patterns (e.g. Fetterman 1989: 35).
The long-term nature of ethnographic research projects can also help provide checks and balances, increasing the trustworthiness of our findings (for illustrative examples of how this can be done, see Pelto and Pelto 1978; Angrosino 2007). As David Fetterman (1989: 46) suggests: "Working with people, day in and day out, for long periods of time is what gives ethnographic research its validity and vitality."
A form of methodological reflexivity can also serve as a check on our work. In this situation, researchers strive to produce and retain very clear records about what they do throughout the research project, so that others (or even they themselves) might "audit" their work at a later date to check for consistency and other reliability issues (e.g. DeWalt and DeWalt 2011: 184–185; Scott and Garner 2013: 243; Bryman and Bell 2016: 169). The idea here is that others should be able to follow the way we gathered our evidence and critically assess it. Not everyone agrees with this idea. Brinberg and McGrath (1985: 13), for example, suggest that this unnecessarily implies that something can ever achieve "full validity," rather than being an ideal toward which we work. Roger Sanjek has noted that any suggestion that others would be able to actually replicate a singular fieldwork experience is spurious. As he puts it (Sanjek 1990: 394): "In ethnography, 'reliability' verges on affectation." Reflexivity, on the other hand, need not be aimed at replication. For most researchers, it is about letting the reader or viewer know enough about who they are and how they conducted the research so that they might better judge the validity of the work being offered to them (e.g. Sanjek 1990; Emerson et al. 2011). The authors just cited suggest that this kind of useful reflexivity can and should be embedded within our field notes, which we can then draw upon as required.
My own position is that reliability and validity do not have to be seen as an either/or proposition. I am in agreement with David Brinberg and Joseph McGrath, who have considered the issue from almost every angle. What they have ended up concluding is that "validity is like integrity, character, or quality, to be assessed relative to purposes and circumstances" (Brinberg and McGrath 1985: 14). What I am suggesting, therefore, is that incorporating more counting into our standard qualitative projects at both the research and analytical levels will give us one more way to help assess our work relative to purpose. At the same time, there is nothing privileged about numbers. In the way that numbers are being used here, they are neither better nor worse than prose or other forms of investigation or analysis. As noted numerous times in this book, qualitative counting can be used effectively only in conjunction with other qualitative methods.
The issue of reliability plagued me from the very first time I did an extended ethnographic study. This occurred during my M.A. degree in anthropology, when I carried out eight months of fieldwork in a home for the elderly in Southwestern Ontario (Fife 1983). The research was conducted through lengthy visits to the home, broken up into extended periods of daily work, two or three times per week. At the time, I was heavily influenced by what became known as symbolic or interpretive anthropology. My main focus was on trying to figure out how rural people, who had previously overwhelmingly subscribed to a cultural belief in individual "independence" and "self-reliance," coped while living within an institution that redefined them as dependent human beings. Using the primary research methods of participant-observation, event analysis (e.g. birthday parties, family days, special events), and semi-structured and unstructured interviews, I came to learn about the ways that many residents reappropriated this government-run institution, reconstituting it as a "home away from home" and the workers in it as "just like family." In keeping with their pre-home understandings of life, family was defined as the people who were supposed to look after each other. Therefore, by symbolically reconstituting both paid workers and the other inhabitants as analogues to one's own family, residents were simultaneously recasting themselves as being "entitled" to the help and care the government institution supplied. This obviated the need to view themselves as "charity cases" who were receiving "hand-outs," or to accept negative connotations they would have previously associated with social and economic dependency. The to-and-fro that occurred between residents, actual family members, and both workers and volunteers (who did in fact see the residents as dependent human beings) was fascinating to uncover.
Still, I worried about relying too heavily upon what people said about themselves and not enough on what people were actually doing. How did I really know if my analysis was correct? Event analysis and fairly intensive periods focusing on the observation portion of participant-observation, while following the "saturation" and "triangulation" methods of data checking noted above, helped give some confidence in my findings. But I remember wishing that I had another method to check some of my most important explanations and understandings. Although I recorded the number of residents, staff members, and volunteers, along with other similar kinds of information, I never really thought of count...