Transhumanism, Nature, and the Ends of Science
eBook - ePub

Transhumanism, Nature, and the Ends of Science

A Critique of Technoscience

  1. 160 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

This book offers a social, political, and aesthetic critique of transhumanism and of the accelerating growth of scientific knowledge generally. Rather than improving our lives, science and technology today increasingly leave us debilitated and infantilized. It is time to restrain the runaway ambitions of technoscientific knowledge.

The transhumanist goal of human enhancement encapsulates a range of dangerous social pathologies. Like transhumanism itself, these pathologies are rooted in, or in reaction to, the ethos of 'more': a cultural love affair with excess, prompted by the libertarian standards of our cultural productions. But the attempt to live at the speed of an electron is destined for failure.

In response, the author offers a naturalistic account of human flourishing in which we attend to the natural rhythms of life. The interdisciplinary orientation of Transhumanism, Nature, and the Ends of Science makes it relevant to scholars and students across a wide range of disciplines, including social and political philosophy, philosophy of technology, science and technology studies, environmental studies, and public policy.

Transhumanism, Nature, and the Ends of Science, by Robert Frodeman, is available in PDF and ePUB formats.

Information

Publisher
Routledge
Year
2019
Print ISBN
9781032092263
eBook ISBN
9780429581267

1 The Tool of Our Tools

“He treats the world as a game.” IRL (“In Real Life”) streamers broadcast their daily lives—all parts, good and bad, exceptional and mundane. Some have hundreds of thousands of followers. A New Yorker profile1 describes one prominent streamer. Armed with a smartphone and a selfie stick, he walks into a restaurant chosen at random. Soon his viewers are “swatting”: calling the restaurant with reports that he’s a child molester or a terrorist with a bomb in his backpack. The nervous manager asks him to leave. Viewers then flood the restaurant’s Yelp reviews with low ratings. Streamer and audience move on to their next amusement.

1

Times certainly have changed. Behavior that once would have resulted in shunning or arrest has now become common. Of course, some of these changes are salutary; some not. The point, however, is the ways in which science and technology make these decisions for us. How have we arrived at this point? These pages trace this story.
This requires a dive into philosophy. Our social conditions today are in many ways unique, and the power of our technologies is unprecedented. It’s a brave new world out there. Nonetheless, our circumstances have been mapped by dead philosophers. Hegel, for instance: he understood that there is a rhythm to events, that innovations cause rebound effects, and that advances provoke their opposite. We are empowered by our technologies, but they also leave us debilitated. We are both aroused and overwhelmed by our inventions; our devices both augment and abolish our freedom.
Thoughtful people have identified an array of challenges facing society: food security, climate change, pandemics, overpopulation, weapons of mass destruction, collapse of the global financial market. They have labored tirelessly to devise solutions—improved crops, more efficient sources of power, better birth control and the empowerment of women, enhanced scanning of incoming cargo, better monitoring of stock activity. Make no mistake: these efforts have accomplished a great deal of good. But the solutions being offered are overwhelmingly technological in nature. Our passions are thought of as unmanageable; progress is defined by improving our tools rather than ourselves. This raises the danger noted by Thoreau: we may become the tool of our tools.
Transhumanists2 are the most toolish of all. They have grand aspirations for our future. They want to turn our scientific and technological powers back upon ourselves. But in their eagerness they skip over the negative aspects of their program. The reasons vary. Some transhumanists are insulated by talent, money, and status: even if others suffer, they will retain their survivalist mansions and New Zealand passports. For others, the desire is more millennialist: no sacrifice is too great to reach the promised land of the Singularity. And often it’s just too difficult to pay attention to possible dangers when life is so filled with wonderful opportunities.
Transhumanists, and the techno-optimists generally, have missed a crucial point. They haven’t realized that Zuckerberg’s motto “move fast and break things” is a pleonasm.

2

Whether or not they are transhumanists, our most prominent scientists and engineers regularly promise a new dispensation for humanity—longer life and heightened skills and pleasures. But listen again, and you can hear rumblings of unease. They emphasize the coming marvels, but when pressed they’ll also grant that technological advance might just snuff out the human race. Elon Musk and Stephen Hawking warn of the dangers of artificial intelligence (AI), even while pushing things forward; James Barrat ponders whether AI will be our final invention. Others are troubled by advances in nanotechnology and genetic enhancement, or worry about do-it-yourself (DIY) microbiologists creating monsters in basement labs.
We will return to the IRL trolls and the DIY biohackers who inject themselves with their own genetic concoctions. For now, let’s focus on the mainstream voices, people like Gates and Hawking. Their views repeat the concerns once expressed by Bill Joy—but without drawing Joy’s conclusion. Thus Hawking: “we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it” (Osborne 2017). But the fact that “we cannot know” did not lead him to suggest that we should pause in our research. Joy is distinctive in that he followed his thinking to its logical conclusion. Sizing up the risks, he argued that we should “limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge” (Joy 2000).
Joy is well-known in tech circles, and his essay was widely read, but few inside or outside of science have taken his suggestion seriously. In the years since he published his essay the growth of knowledge has accelerated, and the dangers of technological advance have increased. But this hasn’t prompted discussions about slowing the growth of knowledge.
True, one can find a few vague pronouncements. The Future of Life Institute held the Asilomar Conference on Beneficial AI in 2017, promulgating a set of 23 principles. The results, however, were pretty weak beer: “AI Arms Race: An arms race in lethal autonomous weapons should be avoided.” Well, yes! One finds little that is programmatic and policy-focused—no senator or Washington think tank is arguing that we should freeze AI funding while we assess the risks, or declaring that DIY biology should be illegal. There is no international conference whose theme is whether it is time to call a halt to the Enlightenment, whether sapere aude! has become too dangerous to pursue. These suggestions lie outside the Overton Window. On the contrary, everyone expects things to accelerate.
Not all the possibilities are dire. But even the non-lethal ones can be quite disorienting. Human brain tissue is now grown in dishes from stem cells—“brain organoids.” Some wonder whether these organoids might come to have—or perhaps even already have—conscious experience. Other experiments involve the manufacturing of chimeras, the transplantation of human cells derived from pluripotent stem cells into the brains of mice. This research could lead to life-altering advances for those who suffer from neurological or psychiatric diseases. But it also threatens cultural norms and religious beliefs, and unsettles our sense of what it means to be human. Are we ready for the Patriots’ next running back to have some percentage of gorilla DNA? Transhumanists speak with the wide-eyed fervor of old-time preachers, but their aspirations challenge cultural norms in unprecedented ways.
On rare occasions someone questions the endless production of knowledge. But usually the concern isn’t with technoscientific knowledge at all but with the social sciences and the humanities. These fields are described as useless—meaning that they do not produce stuff. Or they’re described as being positively obstructionist, meaning that they raise questions about the production of more stuff. But these fields are not as radical as all that. These fields also embrace infinity—the ideology of infinite knowledge production, the norm of producing books and articles for a tiny cohort of like-minded specialists. It hasn’t occurred to humanists that their task is fundamentally different from that of the sciences, that they ask questions rather than provide answers, and that the bulk of their work should be tied to awakening an appreciation of perennial issues rather than engaging in the discovery of new specialized truths.
Set the humanities to one side: the progress that people have in mind is technoscientific in nature. Try suggesting that we take a break from this, that a pause in development might give us a chance to catch our collective breath: you will be told that technological development is unstoppable. Even a temporary pause is impossible. The point isn’t really argued; it’s axiomatic. You can’t stop progress. This despite the fact that we have been able to stop technoscientific development when motivated to—thus the Outer Space Treaty, which banned weapons from space. (That was in 1967; in 2018, the Trump administration proposed the creation of a new military branch dedicated to fighting wars in space.) Nor, it seems, can we discuss the possible redefinition of progress. Everything is possible in terms of technology, while nothing is possible in terms of moderating our sensibilities and desires. The world is a bounty of resources open to manipulation, and, the transhumanists now tell us, so are our bodies and minds. Improving our character isn’t one of our options.
Hitchcock describes similar limits to conversation in Foreign Correspondent (1940). The movie is set in 1939; the International Peace Party is having a meeting to discuss the looming threat of World War II. Someone explains that the coming war involves circumstances over which we have no control. A member of the Peace Party replies:
Yes, those convenient circumstances over which we have no control. It’s always odd, but they usually bring on a war. You never hear of circumstances over which we’ve no control rushing us into peace, do you?
The determinist argument shuttles between the two poles of “can’t” and “shouldn’t.” Under “can’t,” the pursuit of knowledge is treated as if it is written into our DNA, and the budget of the National Science Foundation constitutes a fourth law of motion. The point is also made in terms of political realities. Passing laws to restrain knowledge production is hopeless. Laws could forbid some types of research, but there will always be researchers and countries who will go rogue. (By this logic, we should also give up on outlawing murder.) At some point, the argument shifts to “shouldn’t.” We have so many problems to solve; it’s not right to stop the pursuit of knowledge. Caught between can’t and shouldn’t, we accept our fate and wait expectantly for the wonders (or disasters) in the offing. In any case, there’s no sense dwelling on negative possibilities if there’s nothing to be done about them anyway.
This view is more than a pose but less than a thought-out conclusion; less a counsel of despair than an unexamined intuition and a failure of will. It’s time that we acknowledge that we possess agency here, too. Difficult, yes. Impossible, no. Long-held assumptions need to be challenged—not only the goodness of more and more knowledge and the inevitability of ever more technology, but other beliefs as well: that knowledge is the sole way to address a problem, that self-rule and continued technological advance are compatible, and that technological convenience is an unambiguous good. This is to problematize issues that have been left for dead. But it is possible to turn our attention toward how to persuade people to be more humane and compassionate rather than simply stronger, smarter, and loaded down with toys.

3

Foucault once imagined writing the history of thought in terms of how tacit assumptions become visible:
for a domain of action, a behavior, to enter the field of thought, it is necessary for a certain number of factors to have made it uncertain, to have made it lose its familiarity, or to have provoked a certain number of difficulties around it.
(Rabinow 1998, p. 388)
How is it that the largely laissez-faire production of knowledge is not viewed as a problem, at least potentially? That so few people raise questions about the continued acceleration of knowledge production, particularly in terms of technical know-how? That we hear warnings concerning the dangers of artificial intelligence, but this is not matched with calls to halt research in AI?
“Problematization,” or a shift in the Overton Window, can occur in a number of ways. It can happen through economic disruption, or via the persuasive power of a charismatic individual who prompts the rise of a social movement. (A minor example, perhaps, but at this writing, a 29-year-old freshman congresswoman from New York, Alexandria Ocasio-Cortez, seems to have single-handedly shifted political discourse in the United States.) It can be imposed from above, through the actions of an authoritarian government, or strike like a bolt from the blue via an artist’s vision. Or it can come about through a major political, economic, or environmental disaster. But by whatever process, problematization requires a fundamental shift—a metanoia, a life-changing alteration in perspective—in our intuitions concerning the parameters of our lives.
Such transformations can be quite traumatic, a point that we will explore below. But bad as they can be, it is still worse not to recognize a catastrophe when it has occurred. For the dangers of science and technology do not only lie at some point in the future. Images of frogs and boiling water notwithstanding, it’s possible that the apocalypse has already transpired, and lulled by the trains running on time and the lack of a Death Star, we’ve missed the signs. The United States has already elected a reality TV host president, in part through the machinations of artificial intelligence. Entities like Google and Facebook possess data about us that we do not have about ourselves, and maleficent actors use these sites to manipulate our moods and our political beliefs for political and financial gain.
These possibilities worry many, but our behavior remains the same. The problem is that our behavior isn’t particularly amenable to argument. Rather, our beliefs and actions are rooted in dim presentiments—feeling tones, really—that are the sources of our more propositional claims. These feeling tones are not simply given; they are constructed and directed. They are not steered by argument, but by the images and metaphors of our cultural productions—the revenge of the “useless” arts and humanities.
Much of the following account is devoted to mapping the evolution of these feeling tones. Take one example: perhaps the Ur-image of American culture since the 1970s has been the figure of Dirty Harry,3 the angry, autonomous, and well-armed individual at war with the state. (The political correlate is Ronald Reagan.) This cultural icon redefined our understanding of freedom: limitation has now come to be viewed as an affront. We’ve created a society
Where there is nothing much to believe in, and nothing much to fight for, except the never-ending expansion of personal freedom.
(Hamid 2018)
But this is tacit nihilism, freedom reduced to an instrument for arbitrary ends. Ironically, this also serves the interests of authoritarians, who find that isolated and (despite the firepower) defenseless individuals are easier to manipulate than communities who share a commitment to a common set of values.
This also implies that it’s less likely that opposition will form against today’s rising sources of power. I do not mean nation-states, which are in long-term decline, but rather the welter of private corporations that are global in reach and armed with the latest technological advances. The power wielded by FAGAM (Facebook, Amazon, Google, Apple, and Microsoft) exceeds that of many governments, reflected in their ability to resist and ignore state control. These are stateless corporations rather than American enterprises: 80% of Facebook revenues now come from outside the United States, and 94% of Apple’s cash reserves ($250 billion) lie in offshore accounts, an amount “greater than the combined foreign reserves of the British government and the Bank of England” (Dasgupta 2018). It’s a classic case of misdirection: people are trained to rail against government, while our lives are increasingly governed by corporate monopolies.
But now to my point: behind all this lies science and technology. Not only does technology make such gargantuan companies possible, but it also enables the appropriation of our privacy that poses dangers both public and private. Our phones constantly specify our location, as do our purchases, and we casually give up information concerning our habits in exchange for tiny discounts. Altogether, it is a curious exercise in freedom: technology increases our capacities even as it ensnares us in webs of control.
It wasn’t so long ago that “freedom” had other connotations. Even in living memory, in the 1940s, freedom not only meant increased capacities but also included the idea of self-rule. Rather than the isolated individual confronting massive public and private entities, we participated in small and medium-sized organizations—running and frequenting local businesses, joining social organizations and bowling leagues. In such circumstances it is obvious that we must restrain our prerogatives in order to share a life with others.
If this commonplace is rarely noted today, perhaps it has something to do with the prejudices of academics, who supply much of our public commentary. It’s within the academy that we see the full flowering of today’s libertarian ethic. This is especially true in the humanities: a philosophy department consists of an aggregate of individuals with little sense of s...

Table of contents

  1. Cover
  2. Half Title
  3. Series
  4. Title
  5. Copyright
  6. Dedication
  7. Contents
  8. Preface
  9. Acknowledgments
  10. The Bones of the Argument
  11. 1 The Tool of Our Tools
  12. 2 Beyond the Human Condition
  13. 3 Life in the Transition
  14. Excursus I The Practice of Philosophy in the 21st Century
  15. Excursus II Philosophy, Rhetoric, Policy
  16. Index