Race After Technology

Abolitionist Tools for the New Jim Code

Ruha Benjamin

About This Book

From everyday apps to complex algorithms, Ruha Benjamin cuts through tech-industry hype to understand how emerging technologies can reinforce White supremacy and deepen social inequity.

Benjamin argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the "New Jim Code," she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. Moreover, she makes a compelling case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life.

This illuminating guide provides conceptual tools for decoding tech promises with sociologically informed skepticism. In doing so, it challenges us to question not only the technologies we are sold but also the ones we ourselves manufacture.

Visit the book's free Discussion Guide: www.dropbox.com


Information

Publisher: Polity
Year: 2019
ISBN: 9781509526437
Edition: 1
Subtopic: Demography

1
Engineered Inequity
Are Robots Racist?

WELCOME TO THE FIRST INTERNATIONAL BEAUTY CONTEST JUDGED BY ARTIFICIAL INTELLIGENCE.
So goes the cheery announcement for Beauty AI, an initiative developed by the Australian- and Hong Kong-based organization Youth Laboratories in conjunction with a number of companies to stage the first ever beauty contest judged by robots (Figure 1.1).1 The venture involved a few seemingly straightforward steps:
  1. Contestants download the Beauty AI app.
  2. Contestants take a selfie.
  3. Robot jury examines all the photos.
  4. Robot jury chooses a king and a queen.
  5. News spreads around the world.
As for the rules, participants were not allowed to wear makeup or glasses or to don a beard. Robot judges were programmed to assess contestants on the basis of wrinkles, face symmetry, skin color, gender, age group, ethnicity, and “many other parameters.” Over 6,000 submissions from approximately 100 countries poured in. What could possibly go wrong?
Figure 1.1 Beauty AI
Source: http://beauty.ai
On August 2, 2016, the creators of Beauty AI expressed dismay at the fact that “the robots did not like people with dark skin.” Of the 44 winners across the various age groups, all but six were White, and “only one finalist had visibly dark skin.”2 The contest used what was considered at the time the most advanced machine-learning technology available. Called “deep learning,” the software is trained to code beauty using pre-labeled images; the images of contestants are then judged against the algorithm’s embedded preferences.3 Beauty, in short, is in the trained eye of the algorithm.
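To make that mechanism concrete, here is a minimal sketch of a supervised scoring pipeline of the kind described above. Beauty AI’s actual system is proprietary, so everything in the snippet – the random stand-in features, the labels, and the generic scikit-learn model – is a hypothetical placeholder rather than the contest’s code. The point it illustrates is that the algorithm’s “embedded preferences” are nothing more than patterns learned from human-assigned labels.

```python
# A hypothetical sketch of a supervised scoring pipeline, not Beauty AI's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in features: in a real system these would be extracted from the
# pre-labeled training photos by a deep network, not drawn at random.
X_train = rng.normal(size=(1000, 128))   # one 128-dimensional vector per photo
y_train = rng.integers(0, 2, size=1000)  # human-assigned labels: 1 = "beautiful"

# The model's "preferences" are learned entirely from those labels, so any
# bias in who did the labeling is baked into the decision boundary.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A contestant's selfie is reduced to the same kind of feature vector and
# scored against the learned preferences.
contestant = rng.normal(size=(1, 128))
score = model.predict_proba(contestant)[0, 1]  # probability of "beautiful"
print(f"contestant score: {score:.2f}")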
As one report about the contest put it, “[t]he simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.”4 Columbia University professor Bernard Harcourt remarked: “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.” Beauty AI is a reminder, Harcourt notes, that humans are really doing the thinking, even when “we think it’s neutral and scientific.”5 And it is not just the human programmers’ preference for Whiteness that is encoded, but the combined preferences of all the humans whose data are studied by machines as they learn to judge beauty and, as it turns out, health.
In addition to the skewed racial results, the framing of Beauty AI as a kind of preventative public health initiative raises the stakes considerably. The team of biogerontologists and data scientists working with Beauty AI explained that valuable information about people’s health can be gleaned by “just processing their photos” and that, ultimately, the hope is to “find effective ways to slow down ageing and help people look healthy and beautiful.”6 Given the overwhelming Whiteness of the winners and the conflation of socially biased notions of beauty and health, darker people are implicitly coded as unhealthy and unfit – assumptions that are at the heart of scientific racism and eugenic ideology and policies.
Deep learning is a subfield of machine learning in which “depth” refers to the layers of abstraction that a computer program makes, learning more “complicated concepts by building them out of simpler ones.”7 With Beauty AI, deep learning was applied to image recognition; but it is also a method used for speech recognition, natural language processing, video game and board game programs, and even medical diagnosis. Social media filtering is the most common example of deep learning at work, as when Facebook auto-tags your photos with friends’ names or when apps decide which news and advertisements to show you to increase the chances that you’ll click. Within machine learning there is a distinction between “supervised” and “unsupervised” learning. Beauty AI was supervised, because the images used as training data were pre-labeled, whereas unsupervised deep learning uses data with very few labels. Mark Zuckerberg refers to deep learning as “the theory of the mind … How do we model – in machines – what human users are interested in and are going to do?”8 But the question for us is: is there only one theory of the mind, and whose mind is it modeled on?
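As a minimal sketch of that distinction, the snippet below contrasts a supervised learner, which is steered by human-provided labels, with an unsupervised one, which invents its own groupings. The estimators and data are generic scikit-learn placeholders assumed here only for illustration, not any particular production system.

```python
# Illustrative contrast between supervised and unsupervised learning.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 16))    # stand-in feature vectors for 500 images

# Supervised: the learner is told the "right answers" via pre-labeled data,
# so human judgments are built into what it learns (as with Beauty AI).
y = rng.integers(0, 2, size=500)  # human-assigned category for each image
supervised = SVC().fit(X, y)

# Unsupervised: no labels at all; the algorithm partitions the data on its
# own. Bias can still enter through which data were collected to begin with.
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)

print(supervised.predict(X[:3]))  # predictions shaped by the given labels
print(unsupervised.labels_[:3])   # cluster assignments invented by the model
```

Note that going unsupervised does not make the outcome neutral: the clusters reflect whatever structure exists in the collected data, including the social patterns that produced it.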
It may be tempting to write off Beauty AI as an inane experiment or harmless vanity project, an unfortunate glitch in the otherwise neutral development of technology for the common good. But, as explored in the pages ahead, such a conclusion is naïve at best. Robots exemplify how race is a form of technology itself, as the algorithmic judgments of Beauty AI extend well beyond adjudicating attractiveness and into questions of health, intelligence, criminality, employment, and many other fields, in which innovative techniques give rise to newfangled forms of racial discrimination. Almost every day a new headline sounds the alarm, alerting us to the New Jim Code:
“Some algorithms are racist”
“We have a problem: Racist and sexist robots”
“Robots aren’t sexist and racist, you are”
“Robotic racists: AI technologies could inherit their creators’ biases”
Racist robots, as I invoke them here, represent a much broader process: social bias embedded in technical artifacts, the allure of objectivity without public accountability. Race as a form of technology – the sorting, establishment and enforcement of racial hierarchies with real consequences – is embodied in robots, which are often presented as simultaneously akin to humans but different and at times superior in terms of efficiency and regulation of bias. Yet the way robots can be racist often remains a mystery or is purposefully hidden from public view.
Consider that machine-learning systems, in particular, allow officials to outsource decisions that are (or should be) the purview of democratic oversight. Even when public agencies are employing such systems, private companies are the ones developing them, thereby acting like political entities but with none of the checks and balances. They are, in the words of one observer, “governing without a mandate,” which means that people whose lives are being shaped in ever more consequential ways by automated decisions have very little say in how they are governed.9
For example, in Automating Inequality Virginia Eubanks (2018) documents the steady incorporation of predictive analytics by US social welfare agencies. Among other promises, automated decisions aim to mitigate fraud and depersonalize the process of determining who is eligible for benefits.10 But, as she documents, these technical fixes, often promoted as benefiting society, end up hurting the most vulnerable, sometimes with deadly results. Her point is not that human caseworkers are less biased than machines – there are, after all, numerous studies showing how caseworkers actively discriminate against racialized groups while aiding White applicants deemed more deserving.11 Rather, as Eubanks emphasizes, automated welfare decisions are not magically fairer than their human counterparts. Discrimination is displaced and accountability is outsourced in this postdemocratic approach to governing social life.12
So, how do we rethink our relationship to technology? The answer partly lies in how we think about race itself and specifically the issues of intentionality and visibility.

I Tinker, Therefore I Am

Humans are toolmakers. And robots, we might say, are humanity’s finest handiwork. In popular culture, robots are typically portrayed as humanoids, more efficient and less sentimental than Homo sapiens. At times, robots are depicted as having human-like struggles, wrestling with emotions and an awakening consciousness that blurs the line between maker and made. Studies about how humans perceive robots indicate that, when that line becomes too blurred, it tends to freak people out. The technical term for it is the “uncanny valley” – which indicates the dip in empathy and increase in revulsion that people experience when a robot appears ...
