Introduction
Deborah G. Johnson and Priscilla M. Regan
In this volume, we bring surveillance and transparency practices together under the same lens. Although each type of practice has been studied extensively on its own, the two are rarely (if ever) examined together. Surveillance and transparency are both significant and increasingly pervasive activities in neoliberal societies. Surveillance is increasingly taken up as a means to achieving security and efficiency while transparency is increasingly seen as a mechanism for ensuring compliance or promoting informed consumerism and informed citizenship. Indeed, transparency is often seen as the antidote to the threats and fears of surveillance. We adopt a novel approach and examine surveillance practices and transparency practices together as parallel systems of accountability.
Practices of holding and being held to account are deeply embedded in daily life. Calls for individuals and organizations to account for their behavior (e.g., BP Oil, Bernie Madoff, Anthony Weiner) seem to be linked to strongly felt notions of justice, responsibility, and fairness. This sensibility even seems to underlie the impetus toward democracy when, for example, citizens hold nonelected leaders and regimes accountable for their failure to satisfy their basic needs or rights. More prosaically, in democracy, insofar as elected officials serve at the will of the governed, they are accountable to the electorate, and in this respect accountability is essential to the realization of democracy.
Transparency is a practice that is explicitly targeted to achieve accountability. Citizens in a democracy cannot, for example, hold their representatives accountable (they cannot evaluate, complain, or vote them out) unless they know what they are doing. In theory, at least, transparency pressures leaders and institutions to behave as their constituents expect, that is, to both behave lawfully and be responsive to their concerns. The aphorism of transparency discourse is that "sunlight disinfects." Those who are required to be transparent are less likely to violate their public trust, to deflect or neglect their responsibilities.
Surveillance, on the other hand, is only vaguely recognized as a form of accountability; that is, surveillance seems to be used for many purposes not ordinarily thought of as accountability. More important, surveillance is often seen as a threat to democracy rather than an essential component of it. Citizens may believe surveillance is not legally justified, or they may fear the revelation of undesirable, albeit not illegal, information when they are tracked and monitored. Hence, freedom may be curtailed and rights quietly diminished. Some believe surveillance may even undermine the development of the kind of personalities needed for democratic citizenship (Rule 1974; Flaherty 1989; Reiman 1995; Lyon 2001).
In the past half century, government and civil institutions have increasingly been constituted with computers and information technology.1 In particular, these technologies have been used to enable and shape transparency and surveillance practices. Each new technological capacity, from simple data collection to the Internet and websites, search engines, social networking sites, Twitter, and YouTube, has been used to reconfigure practices by which various individuals, groups, and organizations reveal information about themselves as well as practices by which they are observed, tracked, and monitored.
Scholars and social commentators have painted a mixed picture of the significance of adopting these new technological capacities. One strain of literature and hype suggests that computers and information technology have the potential to enhance democratic institutions as never before possible: the availability of information and the connectivity of individuals across the globe promote, facilitate, and even inevitably lead to democracy (Barber 1984; Bimber 1998; Brinkerhoff 2009). At the other extreme are analyses suggesting information technology will ultimately lead to totalitarian control (Ellul 1964). From the first days of computer usage, some social theorists were concerned about the potential of computers to facilitate centralization of power and autocratic control (Westin 1967; Miller 1971; Burnham 1983); yet others suggest that the significance of computers and information technology for democracy is multidirectional, a mixed and complicated picture (Ferkiss 1969; Glaser 1971; Winner 1977; Beniger 1986; Gandy 1993).
To some extent, these seemingly contradictory claims about the implications of information technology for democracy can be explained by the wide-ranging and malleable capacities of technology. For example, claims that information technology will lead inevitably to democracy tend to focus on the Internet and many-to-many communication, while claims about the potential of technology for centralization of power tend to focus on the scale of information gathering and the threat to personal privacy. So the relationship (if we can call it that) between computers and information technology and democracy is far from clear; the question is, perhaps, too crude to yield insight into what is obviously a complicated phenomenon.
In this volume, we make no grand hypotheses about the information technology–democracy connection. Instead, we examine a set of case studies to understand how transparency and surveillance work when they are instrumented through information technology. The challenge is to explore the information technology–democracy connection by framing electronic transparency systems and electronic surveillance systems as parallel systems of accountability. In this framework, democracy moves to the background as we ask simply: how do electronic transparency systems work? And, how do electronic surveillance systems work? Our presumption is that American democracy is currently constituted in part by electronic transparency and surveillance systems and that in order to understand the information technology–democracy connection, we must first understand how these systems operate.
Although the case studies we examine are American and our discussion of democracy is primarily focused on the United States, our analysis has implications for surveillance and transparency practices situated elsewhere.
The Framework: Parallel Systems of Accountability
Why frame electronic transparency and electronic surveillance together? The simple answer is that at their core, both have the same triad of elements. In both, there are watchers, those who are watched, and accounts (of those being watched). Who produces the accounts is different in each case, but in both, accounts are produced and the accounts are used by watchers to hold the watched accountable. The promise is that examining surveillance and transparency together as parallel systems and developing an analysis built on the simple structure of watchers, watched, and accounts will yield a new and deeper understanding of each.
To be sure, the rationales for systems of transparency and systems of surveillance are generally quite different, as are the institutional arrangements that make up each type of system. We generally think of surveillance as being done by institutions and about individuals for purposes that target the individuals or groups for some sort of action, be it to determine whether the individual is engaging in illegal activity, to provide an individual with a purchasing opportunity, or to stop the individual from boarding an airplane. By contrast, we generally think of transparency as a practice involving individuals or institutions that provide information about themselves in the name of reassuring various constituents by documenting their compliance with legal requirements or shaping opinions by emphasizing certain interpretations of information. In surveillance practices, those who are being watched seem to be passive, while in transparency those who are being watched are active; they control and produce the accounts of themselves.
In both, accounts are focused on a particular domain of activity of interest to the watchers and the lens of watching involves norms for that domain: that is, watchers want to know whether or not those whom they watch fit certain categories (exhibit certain patterns of behavior) or adhere to particular norms. For example, when public officials reveal their financial records, they do so in relation to a norm (a law) that prohibits public officials from engaging in certain kinds of financial arrangements. When advertisers classify their potential customers into various categories on the basis of their browsing behavior, the categories work as descriptive norms; potential customers are treated according to which category they fit. The norm here is an expectation or prediction that the subject in that category will respond in a particular way to a particular kind of advertisement.
Whatever the domain of activity and whatever the norms, watchers use accounts to make decisions about the watched. Information revealed in the name of transparency may be used by citizens to decide whether or not to vote for a public official in the next election. Security officials use information in an individual's files in deciding whether or not to stop the individual at an airport check-in point. Of course, the decision made depends on what is learned about the watched. Often the decisions made by watchers engaged in surveillance or after reading accounts produced in the name of transparency seem to involve no decision at all, but these are effectively decisions. For example, in the case of a traveler whose name does not match any on the terrorist watch list and whose file does not generate any other flag of concern, the decision is made, in effect, to let the person board the plane. Similarly, the elected official who makes her income tax filing available to the press may be reelected without much fanfare if her constituents find nothing unusual in the filings.
In treating surveillance and transparency as parallel systems, this volume works against the grain of current trends. Surveillance scholarship is increasingly seen as a field of its own, and this body of work has evolved from the social control and the privacy literatures. Surveillance studies might be said to take as their subject matter the practices of those who do the watching, while privacy studies focus on the situation of those who are being watched and especially the effects of the watching on the watched. Surveillance studies focus on institutionalized practices in which data about individuals are gathered, sorted, and used, with or without the subjects' knowledge or consent. Surveillance studies are increasingly seen as a better way to get a handle on privacy issues because attention is focused on institutional practices (social sorting, norms, decision making) rather than on individuals, the threat to their interests, and the elusive notion of an individual "right" to privacy (Lyon 2001; Bennett 2011; Regan 2011; Gilliom 2011).
By contrast, transparency, as a scholarly topic, has been of interest primarily to political scientists and public administration scholars who are concerned with government accountability. Transparency systems are generally understood to be systems in which government agencies, corporations, and (less frequently) individuals reveal information about themselves in the name of accountability to others, such as constituents, stockholders, or the public. Data about the subject are intentionally provided by the subject. In the context of government, transparency is seen as an essential component of democratic government; in corporate contexts, transparency practices are seen as essential to functioning markets (i.e., consumers need information to make enlightened choices) and to civil society, since corporate activities can create risks to civil society.
Not only have surveillance studies and transparency studies been separate from one another, as mentioned earlier, but transparency is often seen as the solution to surveillance. The literature on surveillance is rife with suggestions about countering the negative effects of surveillance by requiring those who gather information to make their activities transparent to those being surveilled (Lyon 2007: 181–83). Danna and Gandy, for example, have argued that data-mining companies should simply inform the public of their activities so that the "bright light of publicity" might regulate their activities (Danna and Gandy 2002: 384). Others have argued that transparency might be a remedy for addressing the injustice of government data-mining efforts (Rubinstein, Lee, and Schwartz 2008). Weitzner questions the utility of simple "notice and consent" transparency policies for Google, favoring instead a more inclusive transparency system in which Google discloses its surveillance tactics to groups of outside experts for evaluation (Weitzner 2007). Indeed, "transparency" is one of the key concepts in the statement of Fair Information Principles put forth by the Organization for Economic Cooperation and Development (OECD).
Whatever the reasons for keeping surveillance studies and the transparency literature separate, they are brought together by the recognition that in both kinds of systems, there are watchers, those who are watched, and accounts used to make decisions about the watched. Moreover, both are generally thought to, and even intended to, shape the behavior of the watched. That is, the rationales for both surveillance and transparency systems generally involve some sort of presumption about how the watching will affect the watched. For example, in many systems involving financial transparency, such as campaign finance disclosure, the presumption is that officials will be less corrupt because they have to reveal what they are doing. Similarly, in the classic panoptic prison, the presumption is that prisoners will adjust their behavior to fit the expectations of the guards in the guard tower. Interestingly, some contemporary surveillance seems to go counter to this presumption; those who track the behavior of online consumers, for example, want them simply to behave unfettered by any awareness that they are being watched so that the watchers can better decipher what consumers want. For example, Google wants its customers to reveal their preferences so that it can identify how to provide better search results. Just how watching affects or should affect the behavior of the watched is a complicated matter.
Accountability
In addition to involving watchers, watched, and accounts, surveillance and transparency can be brought together under the same lens by recognizing they are both systems of accountability. Transparency is, of course, commonly viewed as a form or mechanism of accountability; surveillance is not. In transparency, watchers and watched are aware that accounts are produced, and there is the expectation that there will be consequences depending on what the account reveals. Modern surveillance systems are not as explicitly or intentionally presented as systems of accountability; that is, surveillance systems are more often than not presented as if they are designed to achieve some other public value. Airline passengers are monitored in the name of security; Google searches are tracked in the name of providing better search results; blood donors are scrutinized to ensure the safety of blood transfusions.
Minimally, surveillance is accountability in the sense that it involves the production of "accounts" of individuals or groups, but, more important, it involves "accounts" being used to make decisions about those who are observed and involves consequences of various kinds being meted out on the basis of the accounts. In being held accountable for their behavior, the subjects of surveillance are being judged and treated accordingly. Of course, it is not just punishment that is meted out in surveillance systems; the watched may be rewarded with special opportunities, such as a lower interest rate on a loan or a special offer (because the individual has "achieved" a very high credit score), or decisions may be made to do nothing to a subject of surveillance.
Framing surveillance as accountability has the promise of new insights into surveillance. More often than not, one's behavior is observed and judged and consequences are meted out without one's knowledge that a "trial" was being held, without one's knowledge of the norms by which one is being evaluated, and without recourse, except of course if one experiences the consequence and takes the trouble to ferret out who has done the judging, what criteria were used in the judging, and what, if any, system of recourse there is. The obvious examples here are being turned down for a loan or being prevented from boarding an airplane.
Recognizing that surveillance involves accountability helps us to understand why individuals so often react negatively to surveillance. One is being held to account and judged in "trials" that are effectively secret. Judgments are made in places and through processes that are inaccessible to those on trial and protected from public scrutiny. Arguing for a shift in the overarching metaphor used by privacy scholars, from George Orwell's Big Brother to Kafka's The Trial, Solove (2001) begins to capture the idea that surveillance involves accountability. However, Solove does not dwell on the "trial" aspects of the metaphor. Instead, he emphasizes that the Trial metaphor captures the sense of powerlessness, vulnerability, and dehumanization "created by the assembly of dossiers of personal information where individuals lack any meaningful form of participation in the collection and use of their information." That individuals are being held ...