1 The Tool of Our Tools
"He treats the world as a game." IRL ("In Real Life") streamers broadcast their daily lives: all parts, good and bad, exceptional and mundane. Some have hundreds of thousands of followers. A New Yorker profile1 describes one prominent streamer. Armed with a smartphone and a selfie stick, he walks into a restaurant chosen at random. Soon his viewers are "swatting," calling the restaurant with reports claiming that he's a child molester or a terrorist with a bomb in his backpack. The nervous manager asks him to leave. Viewers then flood the restaurant's Yelp reviews with low ratings. Streamer and audience move on to their next amusement.
1
Times certainly have changed. Behavior that once would have resulted in shunning or arrest has now become common. Of course, some of these changes are salutary; some not. The point, however, is the ways in which science and technology make these decisions for us. How have we arrived at this point? These pages trace this story.
This requires a dive into philosophy. Our social conditions today are in many ways unique, and the power of our technologies is unprecedented. It's a brave new world out there. Nonetheless, our circumstances have been mapped by dead philosophers. Hegel, for instance: he understood that there is a rhythm to events, that innovations cause rebound effects, and that advances provoke their opposite. We are empowered by our technologies, but they also leave us debilitated. We are both aroused and overwhelmed by our inventions; our devices both augment and abolish our freedom.
Thoughtful people have identified an array of challenges facing society: food security, climate change, pandemics, overpopulation, weapons of mass destruction, collapse of the global financial market. They have labored tirelessly to devise solutions: improved crops, more efficient sources of power, better birth control and the empowerment of women, enhanced scanning of incoming cargo, better monitoring of stock activity. Make no mistake: these efforts have accomplished a great deal of good. But the solutions being offered are overwhelmingly technological in nature. Our passions are thought of as unmanageable; progress is defined by improving our tools rather than ourselves. This raises the danger noted by Thoreau: we may become the tool of our tools.
Transhumanists2 are the most toolish of all. They have grand aspirations for our future. They want to turn our scientific and technological powers back upon ourselves. But in their eagerness they skip over the negative aspects of their program. The reasons vary. Some transhumanists are insulated by talent, money, and status: even if others suffer, they will retain their survivalist mansions and New Zealand passports. For others, the desire is more millennialist: no sacrifice is too great to reach the promised land of the Singularity. And often it's just too difficult to pay attention to possible dangers when life is so filled with wonderful opportunities.
Transhumanists, and the techno-optimists generally, have missed a crucial point. They haven't realized that Zuckerberg's motto "move fast and break things" is a pleonasm.
2
Whether or not they are transhumanists, our most prominent scientists and engineers regularly promise a new dispensation for humanity: longer life and heightened skills and pleasures. But listen again, and you can hear rumblings of unease. They emphasize the coming marvels, but when pressed they'll also grant that technological advance might just snuff out the human race. Elon Musk and Stephen Hawking warn of the dangers of artificial intelligence (AI), even while pushing things forward; James Barrat ponders whether AI will be our final invention. Others are troubled by advances in nanotechnology and genetic enhancement, or worry about do-it-yourself (DIY) microbiologists creating monsters in basement labs.
We will return to the IRL trolls and the DIY biohackers who inject themselves with their own genetic concoctions. For now, let's focus on the mainstream voices, people like Gates and Hawking. Their views repeat the concerns once expressed by Bill Joy, but without drawing Joy's conclusion. Thus Hawking: "we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it" (Osborne 2017). But the fact that "we cannot know" did not lead him to suggest that we should pause in our research. Joy is distinctive in that he followed his thinking to its logical conclusion. Sizing up the risks, he argued that we should "limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge" (Joy 2000).
Joy is well-known in tech circles, and his essay was widely read, but few inside or outside of science have taken his suggestion seriously. In the years since he published his essay, the growth of knowledge has accelerated, and the dangers of technological advance have increased. But this hasn't prompted discussions about slowing the growth of knowledge.
True, one can find a few vague pronouncements. The Future of Life Institute held the Asilomar Conference on Beneficial AI in 2017, which promulgated a set of 23 principles. The results, however, were pretty weak beer: "AI Arms Race: An arms race in lethal autonomous weapons should be avoided." Well, yes! One finds little that is programmatic and policy-focused: no senator or Washington think tank is arguing that we should freeze AI funding while we assess the risks, or declaring that DIY biology should be illegal. There has been no international conference on whether it is time to call a halt to the Enlightenment, on whether sapere aude! has become too dangerous to pursue. These suggestions lie outside the Overton Window. On the contrary, everyone expects things to accelerate.
Not all the possibilities are dire. But even the non-lethal ones can be quite disorienting. Human brain tissue is now grown in dishes from stem cells: "brain organoids." Some wonder whether these organoids might come to have (or perhaps already have) conscious experience. Other experiments involve the manufacture of chimeras, the transplantation of human cells derived from pluripotent stem cells into the brains of mice. This research could lead to life-altering advances for those who suffer from neurological or psychiatric diseases. But it also threatens cultural norms and religious beliefs, and unsettles our sense of what it means to be human. Are we ready for the Patriots' next running back to have some percentage of gorilla DNA? Transhumanists speak with the wide-eyed fervor of old-time preachers, but their aspirations strain those norms in unprecedented ways.
On rare occasions someone questions the endless production of knowledge. But usually the concern isn't with technoscientific knowledge at all but with the social sciences and the humanities. These fields are described as useless, meaning that they do not produce stuff. Or they're described as positively obstructionist, meaning that they raise questions about the production of more stuff. But these fields are not as radical as all that. They, too, embrace infinity: the ideology of infinite knowledge production, the norm of producing books and articles for a tiny cohort of like-minded specialists. It hasn't occurred to humanists that their task is fundamentally different from that of the sciences, that they ask questions rather than provide answers, and that the bulk of their work should be tied to awakening an appreciation of perennial issues rather than engaging in the discovery of new specialized truths.
Set the humanities to one side: the progress that people have in mind is technoscientific in nature. Try suggesting that we take a break from this, that a pause in development might give us a chance to catch our collective breath: you will be told that technological development is unstoppable. Even a temporary pause is impossible. The point isn't really argued; it's axiomatic. You can't stop progress. This despite the fact that we have been able to stop technoscientific development when motivated to: thus the Outer Space Treaty, which banned weapons from space. (That was in 1967; in 2018, the Trump administration proposed the creation of a new military branch dedicated to fighting wars in space.) Nor, it seems, can we discuss the possible redefinition of progress. Everything is possible in terms of technology, while nothing is possible in terms of moderating our sensibilities and desires. The world is a bounty of resources open to manipulation; and, the transhumanists now tell us, so are our bodies and minds. Improving our character isn't one of our options.
Hitchcock describes similar limits to conversation in Foreign Correspondent (1940). The movie is set in 1939; the International Peace Party is having a meeting to discuss the looming threat of World War II. Someone explains that the coming war involves circumstances over which we have no control. A member of the Peace Party replies:
Yes, those convenient circumstances over which we have no control. It's always odd, but they usually bring on a war. You never hear of circumstances over which we've no control rushing us into peace, do you?
The determinist argument shuttles between the two poles of "can't" and "shouldn't." Under "can't," the pursuit of knowledge is treated as if it were written into our DNA, and the budget of the National Science Foundation constitutes a fourth law of motion. The point is also made in terms of political realities. Passing laws to restrain knowledge production is hopeless. Laws could forbid some types of research, but there will always be researchers and countries who will go rogue. (By this logic, we should also give up on outlawing murder.) At some point, the argument shifts to "shouldn't." We have so many problems to solve; it's not right to stop the pursuit of knowledge. Caught between can't and shouldn't, we accept our fate and wait expectantly for the wonders (or disasters) in the offing. In any case, there's no sense dwelling on negative possibilities if there's nothing to be done about them anyway.
This view is more than a pose but less than a thought-out conclusion; less a counsel of despair than an unexamined intuition and a failure of will. It's time that we acknowledge that we possess agency here, too. Difficult, yes. Impossible, no. Long-held assumptions need to be challenged, and not only the assumptions of the goodness of more and more knowledge and the inevitability of ever more technology. Other beliefs need questioning as well: that knowledge is the sole way to address a problem, that self-rule and continued technological advance are compatible, and that technological convenience is an unambiguous good. This is to problematize issues that have been left for dead. But it is possible to turn our attention toward how to persuade people to be more humane and compassionate rather than simply stronger and smarter and loaded down with toys.
3
Foucault once imagined writing the history of thought in terms of how tacit assumptions become visible:
for a domain of action, a behavior, to enter the field of thought, it is necessary for a certain number of factors to have made it uncertain, to have made it lose its familiarity, or to have provoked a certain number of difficulties around it.
(Rabinow 1998, p. 388)
How is it that the largely laissez-faire production of knowledge is not viewed as a problem, at least potentially? That so few people raise questions about the continued acceleration of knowledge production, particularly in terms of technical know-how? That we hear warnings concerning the dangers of artificial intelligence, but this is not matched with calls to halt research in AI?
"Problematization," or a shift in the Overton Window, can occur in a number of ways. It can happen through economic disruption, or via the persuasive power of a charismatic individual who prompts the rise of a social movement. (A minor example, perhaps, but at this writing, a 29-year-old freshman congresswoman from New York, Alexandria Ocasio-Cortez, seems to have single-handedly shifted political discourse in the United States.) It can be imposed from above, through the actions of an authoritarian government, or strike like a bolt from the blue via an artist's vision. Or it can come about through a major political, economic, or environmental disaster. But by whatever process, problematization requires a fundamental shift in our intuitions concerning the parameters of our lives: a metanoia, a life-changing alteration in perspective.
Such transformations can be quite traumatic, a point that we will explore below. But bad as they can be, it is still worse not to recognize a catastrophe when it has occurred. For the dangers of science and technology do not lie only at some point in the future. Images of frogs and boiling water notwithstanding, it's possible that the apocalypse has already transpired, and, lulled by the trains running on time and the lack of a Death Star, we've missed the signs. The United States has already elected a reality TV host as president, in part through the machinations of artificial intelligence. Entities like Google and Facebook possess data about us that we do not have about ourselves, and maleficent actors use these sites to manipulate our moods and our political beliefs for political and financial gain.
These possibilities worry many, but our behavior remains the same. The problem is that our behavior isn't particularly amenable to argument. Rather, our beliefs and actions are rooted in dim presentiments (feeling tones, really) that are the sources of our more propositional claims. These feeling tones are not simply given; they are constructed and directed. They are steered not by argument but by the images and metaphors of our cultural productions: the revenge of the "useless" arts and humanities.
Much of the following account is devoted to mapping the evolution of these feeling tones. Take one example: perhaps the Ur-image of American culture since the 1970s has been the figure of Dirty Harry,3 the angry, autonomous, and well-armed individual at war with the state. (The political correlate is Ronald Reagan.) This cultural icon redefined our understanding of freedom: limitation has now come to be viewed as an affront. We've created a society
Where there is nothing much to believe in, and nothing much to fight for, except the never-ending expansion of personal freedom.
(Hamid 2018)
But this is tacit nihilism, freedom reduced to an instrument for arbitrary ends. Ironically, this also serves the interests of authoritarians, who find that isolated and (despite the firepower) defenseless individuals are easier to manipulate than communities who share a commitment to a common set of values.
This also implies that it's less likely that opposition will form against today's rising sources of power. I do not mean nation-states, which are in long-term decline, but rather the welter of private corporations that are global in reach and armed with the latest technological advances. The power wielded by FAGAM (Facebook, Amazon, Google, Apple, and Microsoft) exceeds that of many governments, as reflected in their ability to resist and ignore state control. These are stateless corporations rather than American enterprises: 80% of Facebook revenues now come from outside the United States, and 94% of Apple's cash reserves ($250 billion) lie in offshore accounts, an amount "greater than the combined foreign reserves of the British government and the Bank of England" (Dasgupta 2018). It's a classic case of misdirection: people are trained to rail against government, while our lives are increasingly governed by corporate monopolies.
But now to my point: behind all this lie science and technology. Not only does technology make such gargantuan companies possible, but it also enables the appropriation of our privacy, which poses dangers both public and private. Our phones constantly report our location, as do our purchases, and we casually give up information concerning our habits in exchange for tiny discounts. Altogether, it is a curious exercise in freedom: technology increases our capacities even as it ensnares us in webs of control.
It wasn't so long ago that "freedom" had other connotations. Even in living memory, in the 1940s, freedom not only meant increased capacities but also included the idea of self-rule. Rather than the isolated individual confronting massive public and private entities, we participated in small and medium-sized organizations: running and frequenting local businesses, joining social organizations and bowling leagues. In such circumstances it was obvious that we must restrain our prerogatives in order to share a life with others.
If this commonplace is rarely noted today, perhaps it has something to do with the prejudices of academics, who supply much of our public commentary. It's within the academy that we see the full flowering of today's libertarian ethic. This is especially true in the humanities: a philosophy department consists of an aggregate of individuals with little sense of s...