Race After Technology

Abolitionist Tools for the New Jim Code

Ruha Benjamin


Book Information

From everyday apps to complex algorithms, Ruha Benjamin cuts through tech-industry hype to understand how emerging technologies can reinforce White supremacy and deepen social inequity.

Benjamin argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the "New Jim Code," she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. Moreover, she makes a compelling case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life.

This illuminating guide provides conceptual tools for decoding tech promises with sociologically informed skepticism. In doing so, it challenges us to question not only the technologies we are sold but also the ones we ourselves manufacture.

Visit the book's free Discussion Guide: www.dropbox.com


Information

Publisher: Polity
Year: 2019
ISBN: 9781509526437
Edition: 1
Category: Demography

1
Engineered Inequity
Are Robots Racist?

WELCOME TO THE FIRST INTERNATIONAL BEAUTY CONTEST JUDGED BY ARTIFICIAL INTELLIGENCE.
So goes the cheery announcement for Beauty AI, an initiative developed by the Australian- and Hong Kong-based organization Youth Laboratories in conjunction with a number of companies that worked together to stage the first ever beauty contest judged by robots (Figure 1.1).1 The venture involved a few seemingly straightforward steps:
  1. Contestants download the Beauty AI app.
  2. Contestants take a selfie.
  3. Robot jury examines all the photos.
  4. Robot jury chooses a king and a queen.
  5. News spreads around the world.
As for the rules, participants were not allowed to wear makeup or glasses or to don a beard. Robot judges were programmed to assess contestants on the basis of wrinkles, face symmetry, skin color, gender, age group, ethnicity, and “many other parameters.” Over 6,000 submissions from approximately 100 countries poured in. What could possibly go wrong?
Figure 1.1 Beauty AI
Source: http://beauty.ai
On August 2, 2016, the creators of Beauty AI expressed dismay at the fact that “the robots did not like people with dark skin.” All but six of the 44 winners across the various age groups were White, and “only one finalist had visibly dark skin.”2 The contest used what was considered at the time the most advanced machine-learning technology available. Called “deep learning,” the software is trained to code beauty using pre-labeled images; the contestants’ photos are then judged against the algorithm’s embedded preferences.3 Beauty, in short, is in the trained eye of the algorithm.
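To see mechanically what “embedded preferences” means, consider a minimal sketch. It assumes nothing about Beauty AI’s actual (unpublished) pipeline: a simple scikit-learn classifier stands in for the deep network, and the feature layout, the `skin_tone` proxy, and the 0.8/0.2 label rates are all invented for illustration. The point it demonstrates is that a model fit to pre-labeled examples reproduces whatever pattern its labelers encoded:

```python
# Minimal sketch of a supervised "beauty" scorer. This is NOT the Beauty AI
# code (which is not public); it illustrates how a model trained on
# pre-labeled examples reproduces the preferences baked into those labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: each row stands in for a feature vector
# extracted from a photo; the last column stands in for skin tone
# (0 = lighter, 1 = darker).
n = 1000
features = rng.random((n, 5))
skin_tone = (features[:, -1] > 0.5).astype(int)

# Skewed labels: the (hypothetical) labelers rated lighter-skinned faces
# "beautiful" far more often. The model never sees this as "bias" --
# it is simply the signal it is asked to fit.
labels = (rng.random(n) < np.where(skin_tone == 0, 0.8, 0.2)).astype(int)

model = LogisticRegression().fit(features, labels)

# Score new "contestants": predicted winners mirror the labelers' skew.
contestants = rng.random((200, 5))
scores = model.predict_proba(contestants)[:, 1]
dark = contestants[:, -1] > 0.5
print("mean score, lighter-skinned:", scores[~dark].mean().round(2))
print("mean score, darker-skinned:", scores[dark].mean().round(2))
```

Nothing in this code “decides” to prefer lighter skin; the preference arrives entirely through the training labels, which is why the resulting judgments can look objective while encoding the raters’ tastes.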
As one report about the contest put it, “[t]he simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.”4 Columbia University professor Bernard Harcourt remarked: “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.” Beauty AI is a reminder, Harcourt notes, that humans are really doing the thinking, even when “we think it’s neutral and scientific.”5 And it is not just the human programmers’ preference for Whiteness that is encoded, but the combined preferences of all the humans whose data are studied by machines as they learn to judge beauty and, as it turns out, health.
In addition to the skewed racial results, the framing of Beauty AI as a kind of preventative public health initiative raises the stakes considerably. The team of biogerontologists and data scientists working with Beauty AI explained that valuable information about people’s health can be gleaned by “just processing their photos” and that, ultimately, the hope is to “find effective ways to slow down ageing and help people look healthy and beautiful.”6 Given the overwhelming Whiteness of the winners and the conflation of socially biased notions of beauty and health, darker people are implicitly coded as unhealthy and unfit – assumptions that are at the heart of scientific racism and eugenic ideology and policies.
Deep learning is a subfield of machine learning in which “depth” refers to the layers of abstraction that a computer program makes, learning more “complicated concepts by building them out of simpler ones.”7 With Beauty AI, deep learning was applied to image recognition; but it is also a method used for speech recognition, natural language processing, video game and board game programs, and even medical diagnosis. Social media filtering is the most common example of deep learning at work, as when Facebook auto-tags your photos with friends’ names, or when apps decide which news and advertisements to show you to increase the chances that you’ll click. Within machine learning there is a distinction between “supervised” and “unsupervised” learning. Beauty AI was supervised, because the images used as training data were pre-labeled, whereas unsupervised deep learning uses data with very few labels. Mark Zuckerberg refers to deep learning as “the theory of the mind … How do we model – in machines – what human users are interested in and are going to do?”8 But the question for us is, is there only one theory of the mind, and whose mind is it modeled on?
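The supervised/unsupervised distinction can likewise be sketched in a few lines. This is illustrative only – simple scikit-learn models on synthetic features rather than deep networks on photographs – but the contrast is the same: supervised learning fits human-supplied labels, while unsupervised learning finds groupings and leaves it to humans to decide what the groups mean:

```python
# Sketch of the supervised vs. unsupervised distinction, using synthetic
# data as a stand-in for features extracted from photos.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((300, 4))          # stand-in for per-photo feature vectors

# Supervised: the target comes from human annotators, so whatever the
# annotators valued is what the model learns to predict.
y = (X[:, 0] > 0.5).astype(int)   # stand-in for human "beautiful" labels
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:3]))         # predictions echo the annotators' labels

# Unsupervised: no labels at all; the algorithm groups similar items,
# but a human must still interpret what cluster 0 and cluster 1 mean.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters[:3])
```

Beauty AI sat on the supervised side of this divide, which is exactly why the pre-labeled training images mattered so much.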
It may be tempting to write off Beauty AI as an inane experiment or harmless vanity project, an unfortunate glitch in the otherwise neutral development of technology for the common good. But, as explored in the pages ahead, such a conclusion is naïve at best. Robots exemplify how race is a form of technology itself, as the algorithmic judgments of Beauty AI extend well beyond adjudicating attractiveness and into questions of health, intelligence, criminality, employment, and many other fields, in which innovative techniques give rise to newfangled forms of racial discrimination. Almost every day a new headline sounds the alarm, alerting us to the New Jim Code:
“Some algorithms are racist”
“We have a problem: Racist and sexist robots”
“Robots aren’t sexist and racist, you are”
“Robotic racists: AI technologies could inherit their creators’ biases”
Racist robots, as I invoke them here, represent a much broader process: social bias embedded in technical artifacts, the allure of objectivity without public accountability. Race as a form of technology – the sorting, establishment and enforcement of racial hierarchies with real consequences – is embodied in robots, which are often presented as simultaneously akin to humans but different and at times superior in terms of efficiency and regulation of bias. Yet the way robots can be racist often remains a mystery or is purposefully hidden from public view.
Consider that machine-learning systems, in particular, allow officials to outsource decisions that are (or should be) the purview of democratic oversight. Even when public agencies are employing such systems, private companies are the ones developing them, thereby acting like political entities but with none of the checks and balances. They are, in the words of one observer, “governing without a mandate,” which means that people whose lives are being shaped in ever more consequential ways by automated decisions have very little say in how they are governed.9
For example, in Automating Inequality Virginia Eubanks (2018) documents the steady incorporation of predictive analytics by US social welfare agencies. Among other promises, automated decisions aim to mitigate fraud by depersonalizing the process and by determining who is eligible for benefits.10 But, as she shows, these technical fixes, often promoted as benefiting society, end up hurting the most vulnerable, sometimes with deadly results. Her point is not that human caseworkers are less biased than machines – there are, after all, numerous studies showing how caseworkers actively discriminate against racialized groups while aiding White applicants deemed more deserving.11 Rather, as Eubanks emphasizes, automated welfare decisions are not magically fairer than their human counterparts. Discrimination is displaced and accountability is outsourced in this postdemocratic approach to governing social life.12
So, how do we rethink our relationship to technology? The answer partly lies in how we think about race itself and specifically the issues of intentionality and visibility.

I Tinker, Therefore I Am

Humans are toolmakers. And robots, we might say, are humanity’s finest handiwork. In popular culture, robots are typically portrayed as humanoids, more efficient and less sentimental than Homo sapiens. At times, robots are depicted as having human-like struggles, wrestling with emotions and an awakening consciousness that blurs the line between maker and made. Studies about how humans perceive robots indicate that, when that line becomes too blurred, it tends to freak people out. The technical term for it is the “uncanny valley” – which indicates the dip in empathy and increase in revulsion that people experience when a robot appears ...
