Afterlives of Data
Life and Debt under Capitalist Surveillance

  1. 218 pages
  2. English
About this book

What our health data tell American capitalism about our value—and how that controls our lives.

Afterlives of Data follows the curious and multiple lives that our data live once they escape our control. Mary F. E. Ebeling's ethnographic investigation shows how information about our health and the debt that we carry become biopolitical assets owned by healthcare providers, insurers, commercial data brokers, credit reporting companies, and platforms. By delving into the oceans of data built from everyday medical and debt traumas, Ebeling reveals how data about our lives come to affect our bodies and our life chances and to wholly define us.

Investigations into secretive data collection and breaches of privacy by the likes of Cambridge Analytica have heightened concerns among many Americans about exactly what is being done with their data. From credit bureaus and consumer data brokers like Equifax and Experian to the secretive military contractor Palantir, this massive industry has little regulatory oversight for health data and works to actively obscure how it profits from our data. In this book, Ebeling traces the health data—medical information extracted from patients' bodies—that are digitized and repackaged into new data commodities that have afterlives in database lakes and oceans, algorithms, and statistical models used to score patients on their creditworthiness and riskiness. Critical and disturbing, Afterlives of Data examines how Americans' data about their health and their debt are used in the service of marketing and capitalist surveillance.


1

Tracing Life through Data

The guide explains how it’s programmed to transmit biosensory information, like heart-rate, medical needs, sleep patterns. “It will be your guardian, protector. It will bring good things to you.” . . .
In an earlier moment in her life she might have described him as a ghost, a spiritual manifestation of the past. But she knows better now; invisibility is a prison. “Haunting” is a quaint and faint manifestation of the tortured. . . . She knew there was just one way forward and she understood the cost: the facts of her interior, available for use in a public dataset, as part of some kind of game. Besides, she hadn’t made a fuss when he underwent his own erasure.
“Yes, I am a pawn. Can we please go now?”
—Jena Osman, Motion Studies
In the first few months of 2020, as the coronavirus pandemic spread across the United States, the data analytics firm Palantir Technologies won a contract with the US Department of Health and Human Services (HHS) to provide a COVID-19 contact-tracing platform called HHS Protect. Since its founding in 2003 by Silicon Valley libertarian entrepreneurs, Palantir has burnished its infamous reputation by developing platforms for military, police, and antiterrorism applications, such as Project Maven, which utilizes artificial intelligence in military drones. Palantir holds multiple contracts with the likes of the Pentagon, the Department of Homeland Security, and the Federal Bureau of Investigation (FBI). The company describes its core business as building software to support data-driven decision-making and operations. Alex Karp, cofounder and CEO, has characterized its platforms as clandestine services, where company products are “used on occasion to kill people. . . . If you’re looking for a terrorist in the world now you’re probably using our government product and you’re probably doing the operation that takes out the person in another product we build.”1 The data-mining company is probably best known for its work for Immigration and Customs Enforcement (ICE) to develop the controversial Investigative Case Management system. Palantir’s forty-one-million-dollar contract for its data surveillance technology has enabled ICE to intensify its detention raids on targeted communities, with serious consequences.
The stepped-up raids have terrorized migrants and accelerated family separations; they increased undocumented immigrant deportations tenfold in 2018 alone (Mijente 2019).2 When the news broke amid the pandemic that the company had secured the HHS COVID-19 contact-tracing contract, many, especially those in the human rights and immigrants’ rights communities, voiced concerns about the data privacy and security protections on the platform.3 What data would Palantir collect, who exactly would have access to them, could the data be shared, and with whom? Could ICE use HHS Protect to come after undocumented migrants, also among the groups most at risk of contracting the coronavirus?
In a public-relations bid to assuage these concerns for a general audience, Alex Karp agreed to be interviewed by Axios journalist Mike Allen for its cable news program.4 In the online interview, Karp, who was quarantining in his rural New Hampshire home, preempted Allen’s questions concerning the public’s fears about Palantir’s work on a coronavirus contact-tracing tool. Karp focused on what he knew ordinary people were afraid of and what they would want to know:
Where do you get the data? How long do you keep it? Where is it kept? Can it be removed? Is it being used to monetize me? How can I [be] guaranteed this not being repurposed for people to understand my personal life, my political views, my work habits outside of my work environment?
Allen responded, somewhat ironically, that Karp had done his job as a journalist for him. Karp’s rhetorical interview questions also touch on the concerns that recur throughout this book. Who is collecting data about us? How and why do they collect our data, and with whom do they share them? How are personal data used to make money for others? What stories do data tell about us? Why do we trust these data narratives, especially when these become the “truths” that increasingly define us? How and when did data become the facts of our lives?

TRACING THE INVISIBLE

This chapter opened with an epigraph from poet Jena Osman’s Motion Studies, an essay poem composed of several concurrent narratives in conversation with one another that revolve around visibility, the desire to disappear, and the impossibility of escape once captured and categorized by the machines of scientific inquiry and the political economy built around digital data. The first narrative is a speculative story featuring two characters, a woman and a man, who have won, by lottery, the right to be forgotten, to jump off the digital grid and “disappear beyond the company’s horizon” (2019, 19). But much as, in Roald Dahl’s Charlie and the Chocolate Factory (1964), the golden ticket wins Charlie only the chance to prove his worthiness to Willy Wonka, the lottery ticket offers Osman’s protagonists only the right to run a competitive race through the desert toward digital oblivion, against other “lucky” lottery winners. The race takes the contestants ever closer to the infinite horizon of anonymity, and in exchange, as they run, they’re tracked: their movements mapped out, and their heart rates, breath, and other biometric data collected, analyzed, shared, and stored for eternity. Before they can cash in their lottery ticket, both must undergo a procedure that leaves the woman with a physical body, visible and opaque, and her partner, the man, with a body as solid as the woman’s, yet transparent and invisible to human eyes (but legible to computer vision). The woman—in her opacity—realizes that even in their attempt to jump off the grid of visibility, her partner’s transparent body “lives more as a trace, a clue, data” but that both bodies serve as a dialectical contrast to the other, making them both legible to the corporation’s gaze (2019, 17).
The poem’s second entwined narrative concerns Étienne-Jules Marey, a nineteenth-century French inventor and early photography experimentalist who was obsessed with how to make visible the body’s inner, invisible movements, like the heart pumping blood through the body’s complex network of arteries and veins. Marey made the body’s invisible movements graphically legible through a sphygmograph, a machine he invented that traces the pulse onto a piece of paper. All of Marey’s inventions visualized the unseen into graphic traces, images, and lines. These inventions also included cameras, early cinematic prototypes, and chronophotography, which sequentially captured the movement of air and fluids, the flight of birds, and the galloping of horses. Some of Marey’s experiments that attempted to capture as visual information the life-sustaining movements and processes that occur inside bodies became the basis of technologies used today in seemingly disparate contexts of institutionalized power. One such technology is the sphygmomanometer, which measures blood pressure. Another device that owes a lot to Marey is the polygraph machine, or the “lie detector test.” Law enforcement and other security fields in the United States still administer this test to job applicants, although the test is no longer admissible as evidence in court, as the test’s claim that a subject’s change in heart rate or skin conductivity indicates a falsehood has been debunked.5 Yet as historians Lorraine Daston and Peter Galison noted, Marey argued that to use mechanically generated depictions of phenomena was to speak in the “language of the phenomena themselves,” and through the removal of the human, or at least of human intervention, the tendency is to let the machine take over and enable nature to speak for itself (1992, 81).

WHAT COUNTS AS DATA, WHAT DATA BECOME FACT?

Wittgenstein observed that “the world is the totality of facts, not of things” (2010, 25). How is something that is unquantifiable made to count, and to count as “fact”? When asked what constitutes data, many data scientists respond that data are information that help to anchor facts or to get to “the truth” (Leonelli 2016a). Data are abstracted qualitative information that are objectified as quantifiable values, which are then used as evidence of a phenomenon or process. It is in this way that data are made into facts that are used to help ground or reveal a truth about a phenomenon. But this of course is all contingent, shaped by the sociopolitical contexts of where and how data are extracted, who is building the models and algorithms, and how the data will be put to use.
Mary Poovey, a cultural and economic historian, writes about how the modern fact rooted in numbers and measures—data—was born in the early Renaissance. The modern fact’s midwives were the European merchants and burgeoning capitalists tracking their wealth, profits gleaned from colonial empire-building (through expropriation and slavery) in the Americas and Asia—and keeping it separate from the Church (Poovey 1998). Poovey trains a meticulous eye on the rise of double-entry accounting as it developed in fifteenth-century Italy, adapted from Indian and Jewish traders who pioneered the method. It seems to be no accident that this form of accounting coincided with the Western powers’ early colonization in the Americas as well as with the development of early Enlightenment knowledge production. Within her analysis, she shows how bringing knowledge about one’s possessions, outstanding loans and debts, and transactions together into a ledger created connections that were at once both narrative and numerical. The ledger book was an early rhetorical attempt to make epistemic and factual statements about the world, separate from the authority of God or the Church. The double-entry accounting system was a way to confer social authority to numbers, to make numbers both expose truth and bear facts, even if the numbers were invented: “For late sixteenth-century readers, the balance conjured up both the scales of justice and the symmetry of God’s world” (1998, 54).
By rhetorically making the numbers of the ledger book resemble or refer to the balance of God’s order, rather than to witchcraft or sorcery, these early capitalists made numbers into facts. According to Poovey, this moment was the necessary societal shift in the West that gave numbers legitimacy and bestowed them with the authority to say something about the nature of things, about the “real world.” It is all the more significant that these early capitalists used fictitious numbers to prove that the new accounting system was valid. In Poovey’s account, the social and political legitimacy of the double-entry ledger book coincided with the rise of knowledge-making through the documentation and measurement of observable phenomena, another way of making numbers into facts. In the five hundred years since Italian merchants adapted double-entry book-keeping, we have seen scientific inquiry, revolutions and turmoil, slavery, and expropriation of labor, land, natural, and human resources all contribute to the making of the “modern” world and the construction of data as facts. But the process of objectifying phenomena into “data facts” necessarily involves power: power over the conditions of the extraction of the raw materials, and power over the collection, processing, analysis, and packaging of what becomes data.
Data might be understood as something that can be measured, counted, or defined by numbers or statistics—the temperature of the air, the pH level of a soil sample, or the number of people of a certain age who live within a neighborhood. For those who work in the data-based economy, data can be almost anything—the grainy surveillance camera image, the remaining charge on a phone’s battery, or the blood glucose level measured over a three-month period—that can be captured and transformed into digital information. In other words, data can be any qualitative measure translatable into numbers. Once digitized, data can be transmitted, shared, and operationalized in many forms, among them spreadsheets, databases, and platforms. Surveillance cameras, credit card swipes, retail loyalty cards, and phone metadata all capture in a variety of ways what we presume to be our untraceable and fleeting actions—spending idle moments on a sidewalk, gazing into a storefront, walking into a pharmacy, or browsing the aisles of a store—and count them as data. Examining the commercial applications that track and capture consumer data, Shoshana Zuboff (2019) detailed the process of converting the intangible—behaviors and feelings—into finite, computer-readable data; these discrete inputs are fed into algorithmic models to either predict or drive future consumer behaviors.
That data and the algorithmic processes used to analyze them are neutral, unmediated, and unbiased statements on reality—that data are fact—is a persuasive and persistent notion, even in fields like medicine that rely on human interpretation. In the university population health informatics laboratory where I was based during fieldwork for this book, physicians would seek out Theresa, the head of the lab, to collaborate on research that utilizes artificial intelligence (AI) analytical techniques. In one case, an OBGYN wanted to use the lab’s expertise in artificial intelligence systems and deep learning techniques to analyze focus group data she had collected from mothers who experienced trauma in childbirth. The practitioner expressed the belief that such methods, because they removed the “human,” would have less “bias” and the data could speak for themselves. In another case outside Theresa’s population health informatics lab, emergency medicine and medical AI researchers at the University of California, Berkeley turned to deep-learning algorithms to read and interpret knee x-rays in order to override a persistent medical bias in which doctors underestimate the pain experienced by patients from underserved populations, such as ethnic minorities, women, or poor people (Pierson et al. 2021). In this study, the algorithm measured underlying osteoarthritic damage to the knee joints to predict the pain severity a patient experienced, with an accuracy rate almost five times better than that of the radiologists who were interpreting the x-rays. One of the study’s authors, Ziad Obermeyer, when asked in an interview about building AI models to support clinical decision-making, responded: “Do we train the algorithm to listen to the doctor, and potentially replicate decades of bias built into medical knowledge . . . or do we train it to listen to the patient and represent underserved patients’ experiences of pain accurately and make their pain visible?”6
Ethicists of artificial intelligence Alexander Campolo and Kate Crawford (2020) call this dynamic “enchanted determinism,” where magical thinking helps to rationalize a faith that AI is completely free of human bias. Many believe that AI uncovers the truth that data hold, rather than that data merely anchor a truth. But as Crawford notes in Atlas of AI (2021), nothing in AI computing is artificial or intelligent; rather, AI and the subjects (models, analyses, and so forth) it produces materially embody the biopolitical. “AI systems are not autonomous, rational or able to discern anything without extensive, computationally intensive training” by humans (Crawford 2021, 8). Powerful, political-corporate interests build these data- and capital-intensive AI systems, and as such, the analytical results that AI systems produce become a registry of that power. From the lithium extracted from conflict-encumbered countries for computer batteries, to the “dirty” or inaccurate and biased data that are mined and fed into machine learning algorithms, AI is made from massive amounts of natural resources and human labor (and human misery), all of which remain invisible. As Obermeyer and his coauthors recognize, it is not enough simply to acknowledge the medical system’s biases; AI models must be built to account for that bias, as well as for the sociopolitical, at the level of the technical. Accountability is far from the field’s norm, and it is often actively resisted (Raji and Buolamwini 2019). After Timnit Gebru, former cohead of Google’s Ethical AI team, published findings of the bias baked into many of the company’s products, along with her public crit...

Table of contents

  1. Title
  2. Copyright
  3. Dedication
  4. Contents
  5. Acknowledgments
  6. Introduction: Data Lives On
  7. 1.   Tracing Life through Data
  8. 2.   Building Trust Where Data Divides
  9. 3.   Collecting Life
  10. 4.   Mobilizing Alternative Data
  11. 5.   On Scoring Life
  12. 6.   Data Visibilities
  13. Epilogue: Afterlife
  14. Notes
  15. References
  16. Index