The Seductions of Quantification

Measuring Human Rights, Gender Violence, and Sex Trafficking

Sally Engle Merry

About This Book

We live in a world where seemingly everything can be measured. We rely on indicators to translate social phenomena into simple, quantified terms, which in turn can be used to guide individuals, organizations, and governments in establishing policy. Yet counting things requires finding a way to make them comparable. And in the process of translating the confusion of social life into neat categories, we inevitably strip it of context and meaning—and risk hiding or distorting as much as we reveal.

With The Seductions of Quantification, leading legal anthropologist Sally Engle Merry investigates the techniques by which information is gathered and analyzed in the production of global indicators on human rights, gender violence, and sex trafficking. Although such numbers convey an aura of objective truth and scientific validity, Merry argues persuasively that measurement systems constitute a form of power by incorporating theories about social change in their design but rarely explicitly acknowledging them. For instance, the US State Department's Trafficking in Persons Report, which ranks countries in terms of their compliance with antitrafficking activities, assumes that prosecuting traffickers as criminals is an effective corrective strategy—overlooking cultures where women and children are frequently sold by their own families. As Merry shows, indicators are indeed seductive in their promise of providing concrete knowledge about how the world works, but they are implemented most successfully when paired with context-rich qualitative accounts grounded in local knowledge.

Information

Year: 2016
ISBN: 9780226261317

Chapter One

A World of Quantification

Quantification is seductive. It offers concrete, numerical information that allows for easy comparison and ranking of countries, schools, job applicants, teachers, and much else. It organizes and simplifies knowledge, facilitating decision making in the absence of more detailed, contextual information. By quantification, I mean the use of numbers to describe social phenomena in countable and commensurable terms. Quantification depends on constructing universal categories that make sense across national, class, religious, and regional lines. Categorized numbers can then be bundled together into more complex representations of social phenomena, such as good governance or the rule of law. These numbers convey an aura of objective truth and scientific authority despite the extensive interpretive work that goes into their construction.
Indeed, it is the capacity of numbers to provide knowledge of a complex and murky world that renders quantification so seductive. Numerical assessments such as indicators appeal to the desire for simple, accessible knowledge and to a basic human tendency to see the world in terms of hierarchies of reputation and status. Yet the process of translating the buzzing confusion of social life into neat categories that can be tabulated risks distorting the complexity of social phenomena. Counting things requires making them comparable, which means that they are inevitably stripped of their context, history, and meaning. Numerical knowledge is essential, yet if it is not closely connected to more qualitative forms of knowledge, it leads to oversimplification, homogenization, and the neglect of the surrounding social structure. Grounding quantitative knowledge in a qualitative analysis of categories, meanings, and practices produces better indicators. The current rush to quantification risks sacrificing the insights of rich, ethnographic accounts.
A comparison of the information produced by quantitative and qualitative methods of studying battered women’s treatment by the courts illustrates these differences. In 2006, I studied a nongovernmental organization (NGO) that did advocacy work with largely working-class and poor battered women, including African American, Caribbean, Latina, white, Asian, immigrant, lesbian, disabled, and formerly incarcerated women in New York City. These women became members of the organization. They carried out a human rights documentation project on the adequacy of New York City’s family courts for battered women. Fourteen of the members, all domestic violence survivors, interviewed seventy-five other domestic violence survivors about their experiences in the courts and produced a report that outlined a series of abuses. The women who were interviewed talked about losing custody of their children to their batterers despite being the primary caretakers, about inadequate measures for safety in the court buildings, and about the unprofessional conduct of judges and lawyers that they experienced when they raised claims of domestic violence (Voices of Women Organizing Project 2008). The final report compared these problems to the standards articulated in human rights conventions.
In contrast, at about the same time, the UN Office of the High Commissioner for Human Rights (OHCHR) developed a set of indicators for measuring violence against women that were to be used by any country around the world (discussed in chapter 7). Some indicators assessed the adequacy of law enforcement in dealing with domestic violence, the same problem the domestic violence survivors in New York City were examining. The indicators measured the “proportion of formal investigations of law enforcement officials for cases of violence against women resulting in disciplinary action or prosecution” and the “proportion of new recruits to police, social work, psychology, health (doctors, nurses, and others), education (teachers) completing a core curriculum on all forms of violence against women” (UN Office of the High Commissioner for Human Rights 2012a: 99). Thus these indicators measured some dimensions of the legal treatment of domestic violence but not the problems raised by the formerly battered women in the New York City study.
Clearly, these two efforts to document battered women’s experiences with legal institutions differ in the kinds of information they produced. While the first project was based on a particular local situation and generated its categories and questions from the experiences of those who went through it, the second did not address women’s experiences at all. The first one used local knowledge to decide what to count and measure, while the second relied on global measures that had already been developed and used in many different countries. The first effort took into account the ethnicity and social class of the people interviewed as well as the history of the New York City court system, while the second did not. On the other hand, its indicators allowed comparison across cultural contexts and countries in a way that the first approach did not, and it was better able to show the global size and scope of the issue.
This book focuses on the disparity between such qualitative, locally informed systems of knowledge production and more quantified systems with global reach. It argues that despite the value of numbers for exposing problems and tracking their distribution, they provide knowledge that is decontextualized, homogenized, and remote from local systems of meaning. Indicators risk producing knowledge that is partial, distorted, and misleading. Since indicators are often used for policy formation and governance, it is important to examine how they produce knowledge.
Interest in global indicators is now booming. Efforts to measure a wide variety of social phenomena took off in the mid-1990s as scholars and organizations developed indicators for such diverse issues as failed states, transparency, poverty levels, the rule of law, good governance, and the human right to health. Although indicators were developed in the mid-twentieth century to describe economic phenomena such as gross domestic product (GDP), by the end of the century, this technology was being applied to a range of social phenomena. The use of quantitative measures by national and international governments and organizations, as well as by academics and NGOs, has continued to grow in response to the demands of policy makers and the public for information about the world and as an aid to governance.
The contemporary proliferation of indicators used as a mode of governance springs, in large part, from the desire for accountability. How can states or civil society hold governments, corporations, and individuals responsible for their actions? How can donors be sure the organizations they fund accomplish what they have promised? Accountability requires information. Quantitative data, folded into simple and accessible indicators, seem ideal. Indicators of freedom, human rights compliance, trafficking in persons, and economic development are all efforts to measure country performance against global standards and to hold states accountable for their actions. Such quantitative measures promise to provide accurate information that allows policy makers, investors, government officials, and the general public to make informed decisions. The information appears to be objective, scientific, and transparent. Indicators are appealing because they claim to stand above politics, offering rational, technical knowledge that is disinterested and the product of expertise. Once indicators are established and settled, they are typically portrayed in the media as accurate descriptions of the world. They offer forms of information that satisfy the unease and anxiety of living in a complex and ultimately unknowable world. They address a desire for unambiguous knowledge, free of political bias. Statistical information can be used to legitimate political decisions as being scientific and evidence-based in a time when politics is questioned. They are buoyed up by the rise in bureaucracy and faith in solutions to problems that rely on statistical expertise. Such technocratic knowledge seems more reliable than political perspectives in generating solutions to problems, since it appears pragmatic and instrumental rather than ideological. These are the seductions of quantification.

Knowledge Effects and Governance Effects

Numbers packaged into concepts that describe social life are now central to how many people understand the world they live in. They are also central to governance. There is currently a surge of interest in systems of performance monitoring and evaluation, for example. Holding states or corporations accountable requires information on their violations. Evidence-based decision making, experimentalism, audit mechanisms, results-based management, and new public management are emerging forms of governance that rely on measurement and counting. All these forms of governance require knowledge that is classified, categorized, and arranged into hierarchies. In other words, indicators have both a knowledge effect and a governance effect.
Despite the contemporary prominence of quantified knowledge, there has been relatively little analysis of its effects on knowledge and governance. Much of the scholarship on indicators focuses on how to develop an effective, reliable, and valid measure: how to conceptualize what is to be measured, how to operationalize broad and vague concepts, what data sets are available that can be used, how to label indicators so that they will be easy to understand and use, and how to generate buy-in from governments, donors, and other potential users of the indicator. The challenges of measurement, comparability, weighting of factors, and gathering reliable data in very different historical and cultural contexts are well known and widely discussed.
My focus, however, is not on the accuracy of indicators but on the social and political processes of indicator production and their effects on regulation and governance. My ethnographic examination of the way indicators are constructed and used shows that they reflect the social and cultural worlds of the actors and organizations that create them and the regimes of power within which they are formed. This social aspect of indicators is typically ignored in the face of trust in numbers, cultural assumptions about the objectivity of numbers, and the value of technical rationality.
Statistical knowledge is often viewed as nonpolitical by its creators and users. It flies under the radar of social and political analysis as a form of power. Yet how such numerical assessments are created, produced, cast into the world, and used has significant implications for the way the world is understood and governed. Quantitative information influences aid to developing countries, investment decisions, choices of tourist destinations, and many other decisions. A country with poor indicators for the rule of law, human rights compliance, and trafficking invites international intervention and management. Rather than objective representations of the world, such quantifications are social constructs formed through protracted social processes of consensus building and contestation. Once established and recognized, they often circulate beyond the sphere envisioned by their original creators and lose their moorings in specific methodological choices and compromises.
Beneath the “truth” of quantified knowledge, indicators are part of a regime of power based on the collection and analysis of data and their representation. It is important to see who is creating the indicators, where these people come from, and what forms of expertise they have. Rather than revealing truth, indicators create it. However, the result is not simply a fiction but a particular way of dividing up and making known one reality among many possibilities. As indicators cross the gap from social science knowledge to that used by policy makers and the public, the drawbacks and complexities recognized by their creators, such as limited data, the use of proxies, and the uncertainty of flawed or missing data, are typically stripped away. The indicators are presented as unambiguous and objective, grounded in the certainty of numbers. In this form, they act to produce a truth about the world despite the pragmatic compromises that inevitably arise in their creation. Data are never complete and may not measure exactly what the author of the indicator seeks to assess. Thus the truth of indicators can be quite misleading. For example, Morten Jerven illustrates this problem in his analysis of the flaws in information available on African economies and the impact they have on development planning (2013).
The core question of this book is how the production and use of global indicators are shaped by inequalities in power and expertise. It examines the power dimensions of indicators through an ethnographic analysis of the actors and institutions of the human rights movement engaged in the creation and use of three global indicators: indicators focused on violence against women, indicators on trafficking in persons, and indicators of human rights violations. Through a genealogical analysis of these three global indicators, I trace the gradual process of constructing indicators from the fragments of earlier ones and the cultural assumptions and theories of social change embedded in them.

The Genealogical Method

The genealogical method asks how an indicator develops, which actors and institutions promote and finance it, and how and when its features become settled (see Halliday and Shaffer 2015). It considers how the creators grapple with converting the broad terms of a standard into a series of measurable and named phenomena. Measurement generally builds on previous models and approaches, refining or expanding them or correcting their recognized problems. Adapting existing templates and forms of data analysis and presentation requires expert knowledge, producing what I call “expertise inertia.” Expertise inertia means that insiders with skills and experience have a greater say in developing measurement systems than those without—a pattern that excludes the inexperienced and powerless. At the global level, experts are usually cosmopolitan elites with advanced education or people who have had previous experience in developing indicators of the same kind. They are often from the global North and trained in political science, economics, or statistics. Some are social scientists who research social phenomena such as political terror or violence against women.
Countries that have carried out relevant surveys create the models for the next set of surveys. The statisticians from these countries become global experts. In the context of global governance, this means that when experts gather to develop indicators and plan data collection, those from countries that have already tried such data gathering and analysis projects claim special knowledge and authority. For example, in an expert group meeting that I attended in Geneva in 2009, about twenty participants worked on developing measurements of violence against women. Representatives from Italy, Canada, and the United States talked about how such surveys had worked in their countries. People from poorer countries that had not yet carried out surveys of violence against women could not offer such authoritative expert knowledge. To understand how indicators are formed and developed, it is necessary to attend to the microprocesses through which surveys are created, categories defined, phenomena named, translations enacted. The microprocesses are, in turn, shaped by the actors, institutions, funding, and forms of expertise at play. This means that categories and models based on local knowledge are difficult to incorporate.
Those who create indicators grapple with the problem of finding or collecting data relevant to what they want to measure. Gathering data is expensive. Unless the sponsoring organization has funds to collect new data, it must locate existing data that can serve as proxies for the qualities being measured. This includes administrative data, regularly collected by governments (such as census data) or private organizations (such as electricity consumption), and social science data developed for research. Indicator creators with the resources to collect their own data may use population surveys targeted to the particular question they are interested in, but these are expensive. A cheaper alternative is the expert opinion survey. For example, instead of surveying those in the general population about their experiences of corruption, the organization can send questionnaires to local experts about the prevalence of corruption in their country. This is clearly less expensive, but also less comprehensive and accurate. Those without resources have to search out existing databases, which may not actually measure what the indicator seeks to count. The fact that existing data determine what an indicator can measure is what I call “data inertia.” It is relatively hard to address new problems without new data collection, so the way categories are created and measured often depends on what data are available.
Both of these forms of inertia inhibit new approaches to measurement and tend to exclude inexperienced and resource-poor actors from having much influence on what is measured. They relegate those with local knowledge to the sidelines. Since those who choose the template and the modes of data collection are typically powerful individuals with experience and connections to statistically advanced countries, this means that powerful and wealthy countries are likely to set the models for less powerful ones and that weaker states and nonstate actors will have difficulty influencing the shape of the indicators.
Thus it is important to track what forms of expertise are involved in creating an indicator, who pays for the experts, who funds data collection, and which organizations develop and promote the indicator. Those with experience in developing similar indicators are more often listened to and have greater influence in designing the indicator than newcomers. Local, vernacular knowledge is typically less influential than more global, technical knowledge and, based on my attendance at meetings and reading of documents, often does not enter into the discussion at all.

Temporal Dimensions of Indicator Production

The microprocesses of indicator production take place over time. Indicators and other forms of quantitative knowledge are built up through a slow, incremental process. Many are years in the making. Some of the measures deemed most successful by the UN Statistical Commission, for example, are gross domestic product, instituted in the late 1930s, and the system of national accounts, developed first in the 1950s. Both of these measures initially required substantial theoretical work, including developing the idea that such concepts were even measurable. They also needed the creation of templates and measurement devices, mechanisms for classifying and counting, and names for the objects of measurement. They had to be presented through publicly accessible aesthetic forms and labels. Creating and maintaining indicators requires building up bodies of experts who understand them. Over time, indicators are revised as circumstances change but often remain the same in name and conception. In a few cases, indicators such as these achieve broad public acceptance. Debates continue about the details of how to measure and what to include, but the underlying concepts and measurement strategies are established.
Thus indicators gradually become more settled and less open to change. Indicator frameworks, templates, and measurements generally begin with open discussion among alternative measurement strategies and forms of data but gradually become more established and certain. This process often takes two or three decades. As the indicator crystallizes and becomes naturalized, flexible categories and proxies become fixed and unchangeable. Contestation about the indicator’s underlying framework, use of data, and categories of analysis becomes more difficult over time. After a certain point, critics often succeed only in adding a variable or value. Some issues seem settled and not open to debate, while others require continuing efforts at refinement. Some of these debates concern classification and measurement, while others focus on what is to be measured and by whom. Tracing the development of indicators, their institutional basis, and the limited opportunities for their contestation and refusal reveals their quiet exercise of power.

The Ethnography of Indicators

This project is based on six years of intensive ethnographic field research that involved attending innumerable meetings and workshops, discussions with participants and others involved in global indicator projects, interviews with the major players in each of the three indicator initiatives I studied, and formal a...
