
Race After Technology

Abolitionist Tools for the New Jim Code

Ruha Benjamin


About the Book

From everyday apps to complex algorithms, Ruha Benjamin cuts through tech-industry hype to understand how emerging technologies can reinforce White supremacy and deepen social inequity.

Benjamin argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the "New Jim Code," she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. Moreover, she makes a compelling case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life.

This illuminating guide provides conceptual tools for decoding tech promises with sociologically informed skepticism. In doing so, it challenges us to question not only the technologies we are sold but also the ones we ourselves manufacture.

Visit the book's free Discussion Guide: www.dropbox.com


Information

Publisher: Polity
Year: 2019
ISBN: 9781509526437
Edition: 1
Category: Demography

1
Engineered Inequity
Are Robots Racist?

WELCOME TO THE FIRST INTERNATIONAL BEAUTY CONTEST JUDGED BY ARTIFICIAL INTELLIGENCE.
So goes the cheery announcement for Beauty AI, an initiative developed by the Australian- and Hong Kong-based organization Youth Laboratories in conjunction with a number of companies that worked together to stage the first-ever beauty contest judged by robots (Figure 1.1).1 The venture involved a few seemingly straightforward steps:
  1. Contestants download the Beauty AI app.
  2. Contestants make a selfie.
  3. Robot jury examines all the photos.
  4. Robot jury chooses a king and a queen.
  5. News spreads around the world.
As for the rules, participants were not allowed to wear makeup or glasses or to don a beard. Robot judges were programmed to assess contestants on the basis of wrinkles, face symmetry, skin color, gender, age group, ethnicity, and “many other parameters.” Over 6,000 submissions from approximately 100 countries poured in. What could possibly go wrong?
Figure 1.1 Beauty AI
Source: http://beauty.ai
On August 2, 2016, the creators of Beauty AI expressed dismay at the fact that “the robots did not like people with dark skin.” All but six of the 44 winners across the various age groups were White, and “only one finalist had visibly dark skin.”2 The contest used what was considered at the time the most advanced machine-learning technology available. Called “deep learning,” the software is trained to code beauty using pre-labeled images; the images of contestants are then judged against the algorithm’s embedded preferences.3 Beauty, in short, is in the trained eye of the algorithm.
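To see how a model trained on pre-labeled images can inherit its labelers’ preferences, consider a minimal sketch. This is not Beauty AI’s actual code; the two features, the labeling rule, and the library choice (scikit-learn) are illustrative assumptions, and the data is entirely synthetic.

```python
# Minimal sketch: a classifier trained on human-labeled data reproduces
# the labelers' preferences. Entirely synthetic; not Beauty AI's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each photo reduces to two features: skin tone (0 = dark, 1 = light)
# and facial symmetry. Both are hypothetical stand-ins for image features.
n = 1000
skin_tone = rng.uniform(0, 1, n)
symmetry = rng.uniform(0, 1, n)
X = np.column_stack([skin_tone, symmetry])

# Suppose the human labelers tended to mark lighter-skinned faces "beautiful".
# The model never meets the labelers; it only sees their labels.
y = (0.7 * skin_tone + 0.3 * symmetry + rng.normal(0, 0.1, n)) > 0.5

model = LogisticRegression().fit(X, y)

# Two equally symmetric faces, differing only in skin tone:
dark_face, light_face = [[0.1, 0.9]], [[0.9, 0.9]]
print(model.predict_proba(dark_face)[0, 1])   # low "beauty" probability
print(model.predict_proba(light_face)[0, 1])  # high "beauty" probability
```

No racist intent appears anywhere in the code; the skew lives entirely in the labels the model was told to imitate.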
As one report about the contest put it, “[t]he simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.”4 Columbia University professor Bernard Harcourt remarked: “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.” Beauty AI is a reminder, Harcourt notes, that humans are really doing the thinking, even when “we think it’s neutral and scientific.”5 And it is not just the human programmers’ preference for Whiteness that is encoded, but the combined preferences of all the humans whose data are studied by machines as they learn to judge beauty and, as it turns out, health.
In addition to the skewed racial results, the framing of Beauty AI as a kind of preventative public health initiative raises the stakes considerably. The team of biogerontologists and data scientists working with Beauty AI explained that valuable information about people’s health can be gleaned by “just processing their photos” and that, ultimately, the hope is to “find effective ways to slow down ageing and help people look healthy and beautiful.”6 Given the overwhelming Whiteness of the winners and the conflation of socially biased notions of beauty and health, darker people are implicitly coded as unhealthy and unfit – assumptions that are at the heart of scientific racism and eugenic ideology and policies.
Deep learning is a subfield of machine learning in which “depth” refers to the layers of abstraction that a computer program makes, learning more “complicated concepts by building them out of simpler ones.”7 With Beauty AI, deep learning was applied to image recognition; but it is also a method used for speech recognition, natural language processing, video game and board game programs, and even medical diagnosis. Social media filtering is the most common example of deep learning at work, as when Facebook auto-tags your photos with friends’ names or when apps decide which news and advertisements to show you in order to increase the chances that you’ll click. Within machine learning there is a distinction between “supervised” and “unsupervised” learning. Beauty AI was supervised, because the images used as training data were pre-labeled, whereas unsupervised deep learning uses data with very few labels. Mark Zuckerberg refers to deep learning as “the theory of the mind … How do we model – in machines – what human users are interested in and are going to do?”8 But the question for us is: is there only one theory of the mind, and whose mind is it modeled on?
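The supervised/unsupervised distinction can be made concrete with a short sketch of my own (the library and the synthetic data are assumptions, not anything from the book): the same feature matrix can be fed to a supervised classifier, which learns whatever its human-supplied labels encode, or to an unsupervised clustering algorithm, which receives no labels at all.

```python
# Sketch: "supervised" vs. "unsupervised" learning on the same synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))      # 200 "images" reduced to feature pairs

# Supervised (like Beauty AI): humans pre-label every example, and the model
# learns to reproduce those judgments, biases included.
y = (X[:, 0] > 0).astype(int)      # labels supplied by human annotators
clf = LogisticRegression().fit(X, y)

# Unsupervised: no labels; the algorithm only finds structure (here, clusters).
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
```

In the supervised case the target of imitation is explicit: the annotators’ judgments. In the unsupervised case human judgment still enters, just further upstream, in what data gets collected and how it is represented.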
It may be tempting to write off Beauty AI as an inane experiment or harmless vanity project, an unfortunate glitch in the otherwise neutral development of technology for the common good. But, as explored in the pages ahead, such a conclusion is naïve at best. Robots exemplify how race is a form of technology itself, as the algorithmic judgments of Beauty AI extend well beyond adjudicating attractiveness and into questions of health, intelligence, criminality, employment, and many other fields, in which innovative techniques give rise to newfangled forms of racial discrimination. Almost every day a new headline sounds the alarm, alerting us to the New Jim Code:
“Some algorithms are racist”
“We have a problem: Racist and sexist robots”
“Robots aren’t sexist and racist, you are”
“Robotic racists: AI technologies could inherit their creators’ biases”
Racist robots, as I invoke them here, represent a much broader process: social bias embedded in technical artifacts, the allure of objectivity without public accountability. Race as a form of technology – the sorting, establishment and enforcement of racial hierarchies with real consequences – is embodied in robots, which are often presented as simultaneously akin to humans but different and at times superior in terms of efficiency and regulation of bias. Yet the way robots can be racist often remains a mystery or is purposefully hidden from public view.
Consider that machine-learning systems, in particular, allow officials to outsource decisions that are (or should be) the purview of democratic oversight. Even when public agencies are employing such systems, private companies are the ones developing them, thereby acting like political entities but with none of the checks and balances. They are, in the words of one observer, “governing without a mandate,” which means that people whose lives are being shaped in ever more consequential ways by automated decisions have very little say in how they are governed.9
For example, in Automating Inequality Virginia Eubanks (2018) documents the steady incorporation of predictive analytics by US social welfare agencies. Among other promises, automated decisions aim to mitigate fraud by depersonalizing the process and by determining who is eligible for benefits.10 But, as she documents, these technical fixes, often promoted as benefiting society, end up hurting the most vulnerable, sometimes with deadly results. Her point is not that human caseworkers are less biased than machines – there are, after all, numerous studies showing how caseworkers actively discriminate against racialized groups while aiding White applicants deemed more deserving.11 Rather, as Eubanks emphasizes, automated welfare decisions are not magically fairer than their human counterparts. Discrimination is displaced and accountability is outsourced in this postdemocratic approach to governing social life.12
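A toy sketch can make the displacement concrete. Nothing here comes from the systems Eubanks studied; the features, weights, and threshold are invented for illustration. The scorer never sees race, yet a proxy feature that correlates with race carries its effect anyway.

```python
# Hypothetical benefits-eligibility scorer. Race is never an input, but
# "neighborhood_risk" acts as a proxy: if risk scores were assigned unevenly
# across racialized neighborhoods, that bias passes straight through.
# All features, weights, and the threshold are invented for illustration.

def eligibility_score(income: float, prior_flags: int,
                      neighborhood_risk: float) -> float:
    """Higher means more likely to be approved for benefits."""
    return (0.5 * min(income / 20_000, 1.0)
            - 0.2 * prior_flags
            - 0.4 * neighborhood_risk)

APPROVAL_THRESHOLD = 0.1

# Two applicants identical in every coded respect except where they live:
print(eligibility_score(15_000, 0, neighborhood_risk=0.2))  # 0.295 -> approved
print(eligibility_score(15_000, 0, neighborhood_risk=0.9))  # 0.015 -> denied
```

No caseworker discriminated here, and no line of the code mentions race; the discrimination was displaced into the inputs, and accountability along with it.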
So, how do we rethink our relationship to technology? The answer partly lies in how we think about race itself and specifically the issues of intentionality and visibility.

I Tinker, Therefore I Am

Humans are toolmakers. And robots, we might say, are humanity’s finest handiwork. In popular culture, robots are typically portrayed as humanoids, more efficient and less sentimental than Homo sapiens. At times, robots are depicted as having human-like struggles, wrestling with emotions and an awakening consciousness that blurs the line between maker and made. Studies about how humans perceive robots indicate that, when that line becomes too blurred, it tends to freak people out. The technical term for it is the “uncanny valley” – which indicates the dip in empathy and increase in revulsion that people experience when a robot appears ...
