I wished to write a comedy, partly of science, partly of truth. The odd inventor, Mr. Rossum (whose name translated into English signifies “Mr. Intellectual” or “Mr. Brain”), is a typical representative of the scientific materialism of the last century. His desire to create an artificial man—in the chemical and biological, not the mechanical sense—is inspired by a foolish and obstinate wish to prove God unnecessary and absurd. Young Rossum is the young scientist, untroubled by metaphysical ideas; scientific experiment to him is the road to industrial production. He is not concerned to prove but to manufacture …. Those who think to master the industry are themselves mastered by it; Robots must be produced although they are a war industry, or rather because they are a war industry. (Karel Čapek 1923)
The nineteenth-century optimism about science and technology started to dim after the catastrophic Great War. The brutal machinery of modern warfare and the dehumanizing mechanization of life were the subjects of Karel Čapek's 1920 play R.U.R. (Rossum's Universal Robots), in which the term robot was first coined, derived from the Czech word robota, meaning servitude or forced labor. The development of the atomic bomb during the
Second World War did nothing to allay fears about the abuses of technology and the view that science had outpaced humanity. Optimism surged briefly in the 1990s with the hope that the Internet would connect the world and foster democracy, but that dream quickly soured as corporations like Facebook, Google, and Amazon took over the online world, and powerful government agencies began spying on their citizens. China, for instance, has been explicit about its embrace of algorithmic governance and mass surveillance and plans to implement a rating system for its citizens enabled by image and voice recognition technologies. Many of the early inventors and enthusiasts of the web, Tim Berners-Lee, Jaron Lanier, and Sherry Turkle among them, are profoundly disturbed by the toxic turn of the Internet and online culture, which is only heating up in the race among superpowers to dominate the field of artificial intelligence (AI), a technology that has been completely uncoupled from any sense of social progress. In an interview, the historian Jill Lepore discusses the contemporary tech world and notes:
Disruption emerges in the 1990s as progress without any obligation to notions of goodness. And so ‘disruptive’ innovation, which became the buzzword of change in every realm in the first years of the 21st century, including higher education, is basically destroying things because we can and because there can be money made doing so. Before the 1990s, something that was disruptive was like the kid in the class throwing chalk. And that’s what disruptive innovation turned out to really mean. A little less disruptive innovation is called for. (Goldstein 2018)
Discussions about AI and robotics have largely been shaped by computer science and engineering, steered by corporate and military interests, often underscored by transhumanist philosophy and libertarian politics, animated by fiction, and hyped by the media. One of the issues that makes headlines and dominates popular discussions is how to ensure this technology proves miraculous rather than catastrophic for humanity. Interconnected boards and institutes have sprung up to guide its development: Google has promised to establish an ethics board (though it has not disclosed its members), and the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence) in Silicon Valley, the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, England, and the Future of Life Institute at Cambridge, Massachusetts, have all been established to track the "existential risk" AI poses.
Yet many of the founders and funders that dominate these institutes and boards (such as Ray Kurzweil, Nick Bostrom, and Max Tegmark) are transhumanists—those who believe in the radical "uplifting" of the human condition through technological enhancement; the inevitability of strong or "general" AI; the emergence of an autonomous machine superintelligence that will exceed human intelligence; the technological singularity, the merging of man and machine; and immortality. In other words, we have a classic case of the foxes guarding the chicken coop: those overseeing the technology to prevent it from going rogue are also those heavily invested in the idea of its potential for dramatically transforming humanity for the better. However, many of the underlying premises that drive the research remain, for the most part, unexamined: it proceeds with a religious-like faith in technology; it assumes that brains and computers, the biological and the machinic, the technological and the natural are interchangeable; it privileges instrumental reason and algorithmic logic, big data, computing speeds, and efficiency; it promotes itself as a future that will radically break from the past but never imagines itself as an archeological relic; it positions humans as autonomous machinelike managers and masters of the cosmos rather than animals that are dependent on the planet.
Moreover, the question of whether at some future point this technology will prove a threat or a blessing distracts us from the infrastructure currently impacting virtually all domains of life—the economic, political, social, cultural, and environmental. Big data results and proprietary black box solutions replicate biases. Social media platforms that harvest and monetize personal data, steal private data and turn it into proprietary knowledge, and circulate sensationalist clickbait headlines and propaganda to targeted audiences have violated privacy, sucked up advertisement revenue, eroded the independent press, and encouraged trolls and tribalism, which in turn have led to the degradation of political discussions and the undermining of democracy. In the rush "forward," older computer languages and technologies are rendered obsolete by new ones even as many critical systems are structured by the former. The military, which funds a great deal of the research, is developing highly controversial autonomous weapons, hoping to automate war, despite vocal protests from many in the field. Big corporations like Google, Amazon, Apple, and Facebook, which are heavily investing in AI and robotics, are virtual monopolies. Contributing to wealth concentration, they avoid taxes even as they benefit from publicly funded infrastructure and hire relatively few people, all the while making record profits. The short shelf life of electronic devices, with their built-in obsolescence, is exhausting natural resources and contaminating landfills with millions of tons of toxic e-waste, a problem that will continue to grow with the industry push for a robotic labor force. These problems are only the tip of the iceberg that we are already experiencing in the blind race for AI and robots. While much of AI/robotics research takes place within disciplinary silos, the impact of AI/robotics is wide-ranging and needs to be considered in a much broader transdisciplinary context.
John McCarthy first coined the term “artificial intelligence” in 1956 at the Dartmouth Summer Research Project workshop on machine computation. The purpose was to explore “the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” What was conjecture has now become gospel for many despite the fact that McCarthy himself regretted coining the term. He wished he had named the field “computational intelligence,” which perhaps would have better checked the almost automatic conflation of the concepts of the “human” and the “machine” that is so prevalent and problematic in discussions of AI. If, from its inception, the field has been driven by the misleading premise that machines will master and exceed human intelligence, the term computation—“a calculation involving numbers or quantities”—better qualifies the very limited range of “intelligence” that the field in fact covers. At its core, AI is the attempt to get systems to act or think like animals/humans. In reality, it is a combination of math and engineering that involves using computer algorithms, a specific set of unambiguous instructions in a specific order, applied to information that has been turned into data (ones and zeros). In practice, the field is mostly concerned with building profitable artifacts and is unconcerned with abstract definitions of intelligence.
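The definition above, an algorithm as a specific set of unambiguous instructions applied in a fixed order to data, can be made concrete with a minimal sketch in Python (a hypothetical illustration, not drawn from the original text):

```python
# Illustrative sketch: an algorithm is a finite, unambiguous
# sequence of steps applied to data. Here the "intelligence" is
# nothing more than arithmetic carried out in a fixed order.
def arithmetic_mean(values):
    """Return the average of a sequence of numbers."""
    total = 0.0
    for v in values:            # step 1: accumulate each value
        total += v
    return total / len(values)  # step 2: divide by the count

print(arithmetic_mean([2, 4, 6]))  # prints 4.0
```

Every step is mechanical and deterministic; nothing in the procedure resembles understanding, which is precisely the limited, computational sense of "intelligence" the passage describes.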
Many myths have distorted the field, making it difficult to assess the problems and potential of the technology. One: references to fiction often problematically structure the debates about AI and robotics. A 2015 European Parliament draft report on the state of robotics begins:
from Mary Shelley's Frankenstein's Monster to the classical myth of Pygmalion, through the story of Prague's Golem to the robot of Karel Čapek, who coined the word, people have fantasized about the possibility of building intelligent machines, more often than not androids with human features.
Framed by this fictional legacy, the report then proceeds to discuss the implications of the rise of robotics. The report is only one example of the almost ubiquitous conflation of science and fiction in the cultural imaginary that not only infects policy and research but also drives the robotics/AI industry. Claims that fiction is coming true demonstrate a fundamental lack of understanding of how fiction works and thoroughly obfuscate the field, clouding the science and neutering the critical force of fiction.
Two: throughout the field, animals and machines are often viewed as interchangeable. Discussions of the rights of robots often begin with slippery statements like "we must get tougher on technology abuse or it undermines laws about abuse of animals." This problematic conflation of machines and animals dates back to Descartes, who argued that animals are automatons and incapable of processing pain. But, quite simply, animals are not machines, and Descartes' view has long been dismissed. As industrialization on the one hand drives the rapid rise of robots and on the other causes the rapid extinction of species, it is all the more urgent that we resist this conflation. A battery-operated robotic "bee" can never replace bee populations, and the misleading argument that we can replicate animals of which we have only a rudimentary understanding puts all animals (including humans) in peril.
Three: corporations present robots and AI (e.g., Big Blue, Siri, Jibo, Pepper, Robi, Watson, Google's search engine) as autonomous, but behind these "magic" machines are the humans producing the books, music, research, images, and maps that are the source of the machine "knowledge." Take automated language translation, for instance, which, as Jaron Lanier points out, is only possible through the appropriation of the labor of millions of human translators, many of whom are now unemployed because of the technology. In a world of ever-increasing wealth disparity and precarious employment, the high pr...