Evidence
About this book

Howard S. Becker is a master of his discipline. His reputation as a teacher, as well as a sociologist, is supported by his best-selling quartet of sociological guidebooks: Writing for Social Scientists, Tricks of the Trade, Telling About Society, and What About Mozart? What About Murder? It turns out that the master sociologist has yet one more trick up his sleeve—a fifth guidebook, Evidence.

Becker has for seventy years been mulling over the problem of evidence. He argues that social scientists don't take questions about the usefulness of their data as evidence for their ideas seriously enough. For example, researchers have long used the occupation of a person's father as evidence of the family's social class, but studies have shown this to be a flawed measure—for one thing, a lot of people answer that question too vaguely to make the reasoning plausible. The book is filled with examples like this, and Becker uses them to expose a series of errors, suggesting ways to avoid them, or even to turn them into research topics in their own right. He argues strongly that because no data-gathering method produces totally reliable information, a big part of the research job consists of getting rid of error. Readers will find Becker's newest guidebook a valuable tool, useful for social scientists of every variety.


Part I

What It’s All About: Data, Evidence, and Ideas

A Research Problem

Studying Children’s Social Class Position

In the early 1960s, Paul Wallin and Leslie C. Waldo, two Stanford sociologists, wanted to learn how social class affected children’s school performance (a question that still concerns social scientists). They administered a questionnaire to 2,002 eighth-grade boys and girls. To measure social class, they asked the children to answer a question from August Hollingshead’s then well known and often used Index of Social Position (the Hollingshead version assigned family class position based on the answers to this question and a similar question about education):
[Describe] your real father, if you live with him. If you are not living with your real father answer . . . about the man you live with who is supposed to be taking his place. It may be a step-father, foster father, an uncle or somebody else.
Most of the time does he work for himself or for somebody else?
____ he works for himself or has his own business
____ he works for somebody else
____ I don’t know what he does
What is his work or job most of the time?
He _________________________. (Wallin and Waldo 1964, 291)
The two sociologists supplemented the sometimes sketchy answers with two additional sources of information: school records and records kept by school nurses.
Wallin and Waldo don’t describe the content of their research or the uses they intended these data for. Nor do they discuss the problems of meaning I’ll raise shortly. But they probably thought a father’s occupation would serve as a substantial clue to (if not a definitive measurement of) social class, a combination of the economic and social realities of the parents’ way of life and the lives their children might have. They thought this report of the work the father did, this single fact, would give them an indirect way to guess at the family’s income and wealth, and thus an inexact, perhaps not explicitly formulated but nevertheless not meaningless, measure of the parents’ hopes for their children’s education. And, beyond that, the way of life and (why not?) the family culture that would send the children into the adult world with what we now call cultural as well as economic capital, opening some possibilities to them and closing off others—all this related to the level the father reached on the Hollingshead scale. Every social scientist who puts such questions into a questionnaire has some version of these uses in mind.
Knowing the family’s social position would almost surely suggest more specific imagery readers could use to flesh out the implications of class position. Researchers often invoke such images when they write about the “theoretical implications” of a project’s results. The first volume of W. Lloyd Warner’s Yankee City Series (1941–59), at the time of the Wallin-Waldo research a well-known years-long study, in an anthropological style, of a small New England community, contained a series of lengthy and detailed composite portraits (constructed from details pulled out of material on many somewhat similar families) of family life at different class levels, from “lower-lower” to “upper-upper.” And James Bossard’s studies of family table talk (1943, 1944), based on verbatim accounts of what researchers heard real families say over their meals, gave examples of the mechanisms at work in daily life that might create an observable link between the work the father did and the kind of opportunities and life it opened up for children—all of this being what the idea (you could say, as many people often do say, “the concept”) of “social class” might evoke in a social scientist stimulated by answers to such a questionnaire item. Here’s Bossard:
Much of the family’s sense of economic values, and the child’s training in them, are indicated in the following sentences appearing repeatedly in the case material upon which this article is based. “Go easy on the butter, it’s fifty cents a pound.” “Eggs are sixty cents a dozen now.” “Bill’s shoes have to be soled.” “What, again? Why I just paid two dollars for soles three weeks ago.” “I think you ought to be ashamed to waste bread when thousands of Chinese children are starving.” “Mother, Mary soiled her new dress.” “Well, she had better take care of it. We can’t buy another until after Christmas.” It is the absorption of values of this kind, so constant in normal family life, which constitutes such a big gap in the training of the child reared in an institution. (Bossard 1943, 300)

Did Wallin and Waldo’s Data Support Such Conclusions?

Not really. Their article focused instead on a simple problem that arose before they could start making extrapolations like that: the possibility that, because of an indeterminacy they had discovered, the data they so carefully gathered from children couldn’t serve as adequate evidence for any conclusions about anything. They described their problem this way: “Rankings were classified as indeterminate in the present study if there was any question or uncertainty about them because of either limitations of the data or of the occupational scale. Rankings were assigned to 2,002 boys and girls and 17 per cent of the rankings were judged to be indeterminate. An additional 111 respondents could be given no ranking because of complete lack of information from any of our three sources or because of an insufficiency of information that would have made a sheer guess of the ranking [in short, over 22 percent of the respondents couldn’t be classified]” (Wallin and Waldo 1964, 291–92). How were 17 percent of the responses “indeterminate”? Some sizable number of children gave vague answers like “[My father] works for Ford” or “the telephone company” or “He’s in the Navy,” or nonspecific answers like “He sells” or “He’s a contractor.” When they answered the question that way, the researchers couldn’t tell which of the seven classes in Hollingshead’s scale the work so described fit into. Nor could they classify vague answers like “minister,” which might refer to the graduate of a university-based seminary now preaching to an Episcopalian congregation in a well-to-do suburb, but could just as well mean the self-appointed, self-educated pastor of a storefront church in a slum. Nor could they classify the fathers whose children described their work with specific terms the researchers couldn’t locate in standard compendiums of occupational titles, such as “assistant Social president,” “cargo-loading specialist,” “burner in an ironworks,” “marine engineer,” “insurance underwriter,” and “labor-relations agent.” You could guess at what those words represented, but you couldn’t convince a skeptic that they gave you a solid measurement of anything.
In other words, as much as 17 percent of their data failed to measure what it was supposed to measure clearly enough to make classification possible. In addition, another 111 respondents (5.5 percent) provided too little information, from any of the three sources, to be ranked at all. Altogether, the researchers couldn’t classify almost a quarter of the respondents, who thus had to be left out of the tables intended to support the conclusions the study hoped to arrive at. That’s a large enough part of the population surveyed for errors of classification to substantially change the direction of any statistical association found in the tables summarizing the data. Which suggests that you can’t trust what respondents report, about themselves or their families, without some independent corroboration.
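To make the coding problem concrete, here is a minimal sketch, entirely my own illustration rather than anything from Wallin and Waldo’s procedure, of what assigning free-text occupation answers to a seven-class ranking involves. The occupation-to-class mapping is invented for the example and is not Hollingshead’s actual index; answers too vague to place are simply flagged as indeterminate.

```python
# Illustrative sketch of coding free-text occupation answers onto a
# seven-class scale and flagging indeterminate ones. The mapping below is
# invented for the example; it is not Hollingshead's actual index.

OCCUPATION_TO_CLASS = {
    "physician": 1,
    "high school teacher": 2,
    "electrician": 4,
    "truck driver": 6,
}

def rank_occupation(answer):
    """Return a class rank 1-7, or None when the answer can't be classified."""
    key = answer.strip().lower()
    if key in OCCUPATION_TO_CLASS:
        return OCCUPATION_TO_CLASS[key]
    # Answers like "works for Ford" or "minister" name an employer or an
    # ambiguous title rather than a rankable occupation.
    return None

answers = ["Physician", "works for Ford", "minister", "truck driver", "He sells"]
ranks = [rank_occupation(a) for a in answers]
share_indeterminate = ranks.count(None) / len(answers)
print(ranks)                                        # [1, None, None, 6, None]
print(f"{share_indeterminate:.0%} indeterminate")   # 60% indeterminate
```

Run on hypothetical answers like these, most of the replies end up flagged rather than ranked, which is exactly the situation the 17 percent figure describes.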
Wallin and Waldo’s problem was not just their bad luck, and certainly not their incompetence. The incident has a wider interest, a larger application.

Data, Evidence, and Ideas

Social scientists combine three components—data, evidence, and ideas (sometimes called “theories” or “concepts”)—to convince themselves, their colleagues, maybe even a wider audience, that they have found something true, something more than a coincidence or an accident.
The things social scientists observe, however they observe them, and then record more permanently in writing, visual images, or audio recordings—the material they work with—consist of observable physical objects: marks produced by machines, such as the tracings an EKG machine uses to record the electrical activity of a beating heart; marks produced by the people who check a box on a questionnaire or write something a sociologist or historian might find and use; marks social scientists make when they write down what they’ve seen or heard; marks produced by people who record their own behavior as part of the work they do (as police officers record the names of people they arrest and the offense they charge them with); marks produced by employees or volunteers who collaborate with social scientists to record what the people they want to learn about tell them or do in their presence. These recorded traces serve as data, the raw material researchers make science from. Wallin and Waldo’s data consisted of what students wrote on the questionnaire in response to the questions it asked them.
These data, these preserved records of information gathered, become evidence when scientists use them to support an argument: good evidence when the audience accepts the items as valid statements about what happened when someone gathered the original data. We base a statement about a person’s age on the proof provided by a recorded answer to a question someone asks them, on paper or in person, or on information someone copies from an official record of the birth preserved in a local depository of birth certificates—all these kinds of data usually attest well enough to the reliability and truth of the answer that people accept the argument we offer it as support for. “Yes, she really is 22 years old”; her birth certificate proves it as well as any reasonable person could want it proved. And that makes it evidence, data supporting a statement that goes beyond what can be seen on the paper to a reality, an accepted fact. The paper serves as the observable evidence for the fact of age. The word accepted in accepted fact reminds us that the data have to convince someone of their validity, their weight, to become evidence.
The data-turned-into-evidence support a statement about a particular example of some general idea we want other people (fellow members of our scientific tribe, people in other fields, politicians, the general public) to believe or at least accept for now. For scientists, the idea usually belongs to a more general system of ideas or concepts that we call a “theory.” The support the data give to an idea turns them into evidence.
Data, evidence, and ideas make a circle of interdependencies. The data interest us because they help us make an argument about something in the world that they would be consequential for. Expecting that others may not accept our argument, we collect information we expect will convince them that no one could have recorded reality in that form if our argument wasn’t correct. And the idea we want to advance leads us to search for kinds of data, things we can observe and record, that will do that work of convincing others for us. The usefulness of each of the three components depends on how it connects to the other two. No one will accept our idea if the data we offer in evidence don’t compel belief, if our argument about what the data show, what they are evidence for, doesn’t convince people that it supports our idea as we say it does.
How does this apply to Wallin and Waldo? They wanted to offer their data—the answers students wrote on the questionnaires—as evidence of the work the fathers actually did, to present the students’ testimony about their father’s employment as a reality we could count on as support for ideas they had about the larger, complex reality the words “social class” allude to, ideas they wanted their readers to accept.

Polya on Plausibility as an Appropriate Goal for Empirical Science

When I speak of data supporting an idea, I have in mind the version of this argument the mathematician George Polya (1954) reminded empirical scientists about in his analysis of plausible reasoning. I quote it at length because it’s the fundamental approach I’ve followed in this book:
Strictly speaking, all our knowledge outside mathematics and demonstrative logic (which is, in fact, a branch of mathematics) consists of conjectures. There are, of course, conjectures and conjectures. There are highly respectable and reliable conjectures as those expressed in certain general laws of physical science. There are other conjectures, neither reliable nor respectable, some of which make you angry when you read them in a newspaper. And in between there are all sorts of conjectures, hunches, and guesses.
We secure our mathematical knowledge by demonstrative reasoning, but we support our conjectures by plausible reasoning. A mathematical proof is demonstrative reasoning, but the inductive evidence of the physicist, the circumstantial evidence of the lawyer, the documentary evidence of the historian, and the statistical evidence of the economist belong to plausible reasoning.
The difference between the two kinds of reasoning is great and manifold. Demonstrative reasoning is safe, beyond controversy, and final. Plausible reasoning is hazardous, controversial, and provisional. Demonstrative reasoning penetrates the sciences just as far as mathematics does, but it is in itself (as mathematics is in itself) incapable of yielding essentially new knowledge about the world around us. Anything new that we learn about the world involves plausible reasoning, which is the only kind of reasoning for which we care in everyday affairs. Demonstrative reasoning has rigid standards, codified and clarified by logic (formal or demonstrative logic), which is the theory of demonstrative reasoning. The standards of plausible reasoning are fluid, and there is no theory of such reasoning that could be compared to demonstrative logic in clarity or would command comparable consensus. (v)
The rest of this book, everything that follows, consists of guesses that seem plausible to me, and I hope to you, on the basis of evidence I’ve provided. I expect social science reports to consist of statements supported by reasonable arguments and data that suggest plausible, believable conclusions. But I also expect, as a working scientist, that most of what we think is true will someday turn out to be not so true, to be subject to all sorts of variations our present formulations and data can’t explain. I expect them to explain some part of the puzzle but leave plenty of work still to be done.

Back to Wallin and Waldo

They realized that their data couldn’t plausibly support whatever they had hoped to say about social class, class cultures, education, and aspects of child socialization. The evidence they had intended to present was fatally flawed by the unarguable fact that 22 percent of the children hadn’t given them the information they needed to make such arguments plausible—because when you don’t know how to classify almost a quarter of the people furnishing the data, when you don’t know which group to count them in, no differences in all those other things your ideas suggest were related to social class can be trusted. What if those “ministers” you ended up treating as “really” credentialed pastors of solidly middle-class churches in fact had no claim to being ministers other than their own belief that they were called to that line of work, as you might have learned had you visited their churches and talked to them? What if the father whose child said he “worked for the telephone company” and whom you had classified as an executive was really a janitor who cleaned the executives’ offices at night, or a trained technician who climbed telephone poles all day repairing broken wires? Or the other way around? Wallin and Waldo saw that they didn’t have plausible evidence for the fine-grained arguments they had hoped to be able to make about class and culture and all the rest of it.
That’s why they were perturbed. But they gave us even more reason to worry, because they found only one article in the entire sociological literature in which any of the many researchers who had studied similar problems with similar methods had mentioned any such difficulty:
Our research had three potential sources from which the required occupational data might be obtained. It is therefore highly probable that we had a firmer basis for the occupational ranking of families than other studies of school children that have been restricted to the data secured from the survey respondents. Insofar as this assumption is justified it can be presumed that the incidence of indeterminate rankings in these studies was substantially greater than in ours.
Indeterminate rankings are errors of measurement that should be reported as qualifying the findings of any research. Beyond this, an investigator’s awareness of the number and magnitude of his indeterminate rankings may indicate to him the desirability of using a less refined ranking of occupations than was originally planned. (Wallin and Waldo 1964, 292)
You have to dig what they really meant to say out of their circumspectly worded suspicions, but it appears that no other researchers, other than one other team based at Stanford, had ever reported such a problem. Wallin and Waldo prudently left the evident conclusion unstated. I’ll state it. Other people had had such a problem (how could that not be true?) but had solved it somehow without reporting either the problem or the solution. When you consider how often researchers had used (and still use) such scales to measure social class, you have to recognize that studies of social class based on such data must contain a lot of unreported, unmeasured error. But the use of those instruments assumes that the researchers successfully measured the relevant variables for all the cases. Which might account for the persistent anomalies and contradictions reported in such research areas.
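A small simulation, again my own illustration with made-up numbers rather than data from any study cited here, shows the kind of damage unreported classification error can do: assume a real gap in some school outcome between two class groups, code a share of the cases by guesswork, and compare the gap you then observe.

```python
# Toy simulation with invented numbers: coding a share of class assignments
# by guesswork tends to pull an observed difference between class groups
# toward zero; errors that run systematically could distort it even further.
import random

random.seed(1)
N = 2002                                      # Wallin and Waldo's sample size
TRUE_RATE = {"higher": 0.60, "lower": 0.40}   # assumed true outcome rates

population = []
for _ in range(N):
    true_class = random.choice(["higher", "lower"])
    outcome = random.random() < TRUE_RATE[true_class]
    population.append((true_class, outcome))

def observed_gap(guessed_share):
    """Difference in outcome rates between the two recorded class groups."""
    groups = {"higher": [], "lower": []}
    for true_class, outcome in population:
        recorded = true_class
        if random.random() < guessed_share:   # an indeterminate case coded by guessing
            recorded = random.choice(["higher", "lower"])
        groups[recorded].append(outcome)
    rates = {g: sum(v) / len(v) for g, v in groups.items()}
    return rates["higher"] - rates["lower"]

print(f"gap with clean coding:       {observed_gap(0.00):.3f}")
print(f"gap with 22% coded by guess: {observed_gap(0.22):.3f}")
```

Random guessing only waters a real difference down; errors that run systematically in one direction can go further and reverse its apparent sign, which is the possibility an unclassified quarter of the sample leaves open.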
Wallin and Waldo’s problem, and its many analogues, crop up in one form or another everywhere. Such problems appear in every variety of social science research, using every kind of method to collect every kind of data. We ought to regard them as the normal problems of our work, and we ought to broaden our understanding of what we do so that “normal science,” for us, routinely includes attending to such difficulties with an eye to getting rid of them as spoilers of our data. But we also ought to think about using them, more positively, as ways to open up new areas of research. As the distinguished survey researcher Howard Schuman said years ago (1982, 23), “The problems that occur in surveys are opportunities for understanding once we take them seriously as facts of life.” Which hints that he thought our colleagues in social science weren’t taking these problems as seriously as they should.

Another Problem, Another Idea, a Possible Solution

Suppose your own research data don’t show the indeterminacy that gave Wallin and Waldo such problems. You ask a question and everyone gives you definite, easily interpreted answers. Perhaps you asked them to choose between specific, well-defined alternatives, as when someone asks your age and gives you a list of ranges to choose from: 18–25, 26–45, and so on, up to 80+. No confusion about what the answers mean, no problems about which category to classify the respondents in. A lot of information social scientists gather seems to be like that.
For instance, questionnaires often ask how frequently the respondent does something—visits relatives, commits illegal acts, almost anything a researcher might be interested in—and require that the answer be an integer, a whole number. Some well-known surveys of public participation in the arts relied on answers to ...

Table of contents

  1. Cover
  2. Title Page
  3. Copyright Page
  4. Dedication
  5. Contents
  6. Acknowledgments
  7. Part I: What It’s All About: Data, Evidence, and Ideas
  8. Part II: Who Collects the Data and How Do They Do It?
  9. Afterword / Final Thoughts
  10. References
  11. Index