Great Philosophical Objections to Artificial Intelligence
eBook - ePub

Great Philosophical Objections to Artificial Intelligence

The History and Legacy of the AI Wars

Eric Dietrich, Chris Fields, John P. Sullins, Bram Van Heuveln, Robin Zebrowski

  1. 312 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

Book Details

About This Book

Winner of a 2022 CHOICE Outstanding Academic Title award, this book surveys and examines the most famous philosophical arguments against building a machine with human-level intelligence. From claims and counter-claims about the ability to implement consciousness, rationality, and meaning, to arguments about cognitive architecture, the book presents a vivid history of the clash between philosophy and AI. Tellingly, the AI Wars are mostly quiet now. Explaining this crucial fact opens new paths to understanding the current resurgence of AI (especially deep learning AI and robotics), what happens when philosophy meets science, and the role of philosophy in the culture in which it is embedded. Organising the arguments into four core topics - 'Is AI Possible?', 'Architectures of the Mind', 'Mental Semantics and Mental Symbols' and 'Rationality and Creativity' - this book shows the debate that played out between philosophers on both sides of the question, as well as the debate between philosophers and the AI scientists and engineers building AI systems. Up-to-date and forward-looking, the book is packed with fresh insights and supporting material, including:

- Accessible introductions to each war, explaining the background behind the main arguments against AI
- Chapter-by-chapter accounts of what happened in the AI wars, the legacy of the attacks, and what new controversies are on the horizon
- An extensive bibliography of key readings




Part I

The AI Wars, 1950 to 2000

1. Gödel and a Foundational Objection to AI
2. How Would We Know If a Computer Was Intelligent? The Turing Test is Not the Answer
3. How Computer Science Saved the Mind
4. Implementing an Intelligence
5. The Strange Case of the Missing Meaning: Can Computers Think About Things?
6. What is Relevant to What? The Frame Problem


The first AI conference was held in 1956 at Dartmouth College, and one of the first AI programs, Logic Theorist, was completed that same year. As with all such conferences about a new development, the conference happened years after work on artificial intelligence had started – though before 1956, the field was not called ‘artificial intelligence’ nor was it even regarded as a separate field. However, just a few years later, and continuing for decades, the AI project of building a machine with human-level intelligence was met with a barrage of sophisticated attacks by philosophers.
We can date the first of these attacks to around 1959, when J. R. Lucas, a British philosopher, presented his paper ‘Minds, Machines, and Gödel’ to the Oxford Philosophical Society.1 Taken together, these attacks pointed out what appeared to be limitations to computation, raised problems about machines being able to think about specific things like coffee cups or numbers, and flagged the general problem of how a machine could be conscious and aware. Whether or not a computer could be rational and moral was also questioned. And finally, several issues were raised regarding cognitive architecture. For example, perhaps only something with an architecture like a brain could actually be intelligent, and computer architectures are nothing like brain architectures. Of course, AI researchers and pro-AI philosophers responded. Sometimes they responded by trying to directly refute the philosophical objections, other times they built and implemented computer programs.
We are concerned, in this book, more with the philosophical arguments than the implementations; the latter will be mentioned only when needed. We consider ‘philosophy’, moreover, to be an activity, not just an academic discipline. Many computer scientists and other AI researchers took straightforwardly philosophical positions and made philosophical arguments, often from a position of great naivetĂ© about philosophy. Especially early in the wars, philosophers often responded in kind, from positions of great naivetĂ© about computer science.
Somewhere around the turn of the millennium, the attacks on AI by philosophers abated; the pro-AI side also calmed down. No qualitatively new issues were identified and no new arguments were launched. While the old arguments were sometimes repeated, perhaps with small variations, they had largely lost their urgency. Did AI researchers successfully answer all the philosophers’ objections and allay all their concerns? Far from it. Did AI researchers come to see that the philosophers were right? No. Did anti-AI philosophers come to see that they were wrong? No. Did AI researchers give up their quest? Not at all: research in AI techniques such as machine learning and data mining is robust and thriving, and its practitioners are currently very much in demand. Indeed it is mainly the success of AI in practice that has generated the ethical issues explored in Part II.
It is important to realize that not all philosophers were anti-AI. Many were very supportive and enthusiastic, like Sloman, above. These pro-AI philosophers, as well as AI researchers themselves, pushed back against the anti-AI philosophers. But, as mentioned above, no side succeeded in pushing the other off the field – genuine peace has not emerged. Rather a stalemate has arisen, along with the emergence of a wait-and-see attitude. As will be discussed in Part II, this attitude of wait and see was induced, at least in part, by the emergence on the scene of a major new player, cognitive neuroscience.
Here is a general overview of some of the different forces that together worked to quiet the AI wars.
1. In the beginning, there were many proclamations like Sloman's above. Many on the pro-AI side were positively gushing about how wonderful AI was and was going to be. The final hurdles to understanding human intelligence - a goal sought, arguably, since at least Plato - were falling 
 the end was in sight, true understanding was at hand. And with it, all the good things that would come from having artificial intelligences helping us run the world. The end of war (the bloody kind), the end of disease, famine, and hardship. However, the most important thing accomplished by the first wave of AI, actually, was teaching us how unbelievably complicated the hardware of the brain is, and how unbelievably complicated the processes involved in thinking really are. Basically, AI researchers and their comrades underestimated by several orders of magnitude how hard it was going to be to build a machine with human-level intelligence. As the decades rolled by, this failure of AI to deliver our intelligent, silicon planet-mates struck many anti-AI philosophers as evidence that they, the anti-AI-ers, were right or at least in the right neighbourhood.
2. As the overwhelming and completely under-appreciated complexity of human thought was emerging and making everyone re-evaluate positions once thought obvious, AI was quietly progressing. The successes of Google and Facebook, as corporations, are due to these advances. But this kind of progress didn't seem philosophically problematic - it wasn't a threat to our metaphysical or our epistemic understandings of ourselves. AI researchers involved in this latter-day progress did not go around saying their machines were conscious; they didn't even say their machines were intelligent. They merely said their machines were more useful. What's philosophically objectionable about that? It was not until this kind of second-wave AI was socially ubiquitous that ethical issues about the uses of AI fully came to the fore.
3. We have so far referred to the sides in the AI wars as the pro-AI side and the anti-AI side. This is convenient, but unfortunately it gives the impression that each side was coherent and of one mind. This impression is wrong. There were rebellions and robust disagreements among members of the same side. Philosophers who thought AI was a pipe dream for one reason attempted to refute those who thought it was a pipe dream for another reason. AI defenders defended different types of AI (e.g. classical, rule-based systems versus distributed, parallel systems), and many held quite different views of what would constitute an AI success. The topic of AI seemed to unleash a storm of arguments all going in myriad directions. All this tumult was exhausting.
4. The philosophical attacks not only exposed problems in AI and the related fields of cognitive and developmental psychology, and more recently cognitive neuroscience, but also exposed problems in philosophy. Key philosophical concepts having to do with creativity and rationality, with semantics and thinking about objects in the environment, with consciousness, and with the very notion of having a mind at all were deployed to attack AI. But then other philosophers argued that these very notions were themselves open to attack from different directions, the most important being that the major philosophical concepts were not well-defined. Soon it emerged that it may not be possible to define these concepts well enough to use them. It's hard to win a battle if you are unclear on how to use your weapons. It's even harder if you aren't sure what your weapons are.
There is much more to be said about these four. Fortunately, there is a book in which to say it all: this book. Understanding how these four forces played out in the decades leading up to 2000 is the goal of Part I.
The AI wars were a short, but heady time in human history. The details of why the wars went silent reveal a goldmine of information and knowledge about humans and their neuropsychology, AI and computers, philosophy and its strange nature, and the roles of science and technology in our modern culture. Here then is the history of the AI wars.

The First War:
Is AI Even Possible?


Gödel and a Foundational Objection to AI

Chapter Outline

1. Introduction: Advances and Naysayers
2. John Randolph Lucas: AI's First Naysayer
3. Gödel's Theorem
4. What is Really Proved in Section 3
5. An Objection to Gödel's Incompleteness Theorem
6. Lucas's Objection Against AI
7. Does Lucas's Argument Work?

1. Introduction: Advances and Naysayers

Human technological advances have always come with naysayers opposing the advance. Sometimes the opposition raises good points. Splitting the atom was such a case. In hindsight, one can rationally conclude that the proliferation of nuclear weapons now was too high a price for nuclear knowledge then. Sometimes the naysayers, as well-meaning as they are, miss the profound problem for some immediate one, usually because they lack the knowledge future advances will bring. At the turn of the twentieth century, those opposed to the automobile decried the speed and danger of the machines. Speed and danger proved to be real problems, of course, but the profound problem was automobiles dumping the greenhouse gas carbon dioxide into our atmosphere. Today, the typical passenger vehicle dumps into our air around 4.6 metric tons of carbon dioxide per year.2
Then there are the puzzling cases. In one of the crueller ironies of history, the great Greek physician Galen became his own naysayer. Galen (c. 130–200 CE) wrote several important books on medicine, the most influential of which was called On the Usefulness of the Parts of the Body. Galen's influence was due partly to the fact that he was one of the first experimental physicians, and he constantly urged the physicians who came after him to learn from experience and to focus on knowledge that could cure patients. Unfortunately, Galen's influence went beyond anything he could have imagined, beyond anything he would have wanted. For approximately fifteen hundred years, until the late seventeenth century, Galen's books were regarded as sacred texts. Instead of fu...


  1. Cover
  2. Half-Title Page
  3. Series Page
  4. Title Page
  5. Contents
  6. List of Figures
  7. Prologue: The AI Wars and Beyond
  8. Part I The AI Wars, 1950 to 2000
  9. Introduction
  10. The First War: Is AI Even Possible?
  11. 1 Gödel and a Foundational Objection to AI
  12. 2 How Would We Know If a Computer Was Intelligent? The Turing Test is Not the Answer
  13. The Second War: Architectures for Intelligence
  14. 3 How Computer Science Saved the Mind
  15. 4 Implementing an Intelligence
  16. The Third War: Mental Semantics and Mental Symbols
  17. 5 The Strange Case of the Missing Meaning: Can Computers Think About Things?
  18. The Fourth War: Rationality, Relevance, and Creativity
  19. 6 What is Relevant to What? The Frame Problem
  20. Part II Beyond the AI Wars: Issues for Today
  21. Introduction
  22. 7 What about Consciousness?
  23. 8 Ethical Issues Surrounding AI Applications
  24. 9 Could Embodied AIs be Ethical Agents?
  25. Conclusion: Whither the AI Wars?
  26. Notes
  27. Bibliography
  28. Index
  29. Copyright
Citation Styles for Great Philosophical Objections to Artificial Intelligence

APA 6 Citation

Dietrich, E., Fields, C., Sullins, J., Heuveln, B. V., & Zebrowski, R. (2021). Great Philosophical Objections to Artificial Intelligence (1st ed.). Bloomsbury Publishing. Retrieved from (Original work published 2021)

Chicago Citation

Dietrich, Eric, Chris Fields, John Sullins, Bram Van Heuveln, and Robin Zebrowski. (2021) 2021. Great Philosophical Objections to Artificial Intelligence. 1st ed. Bloomsbury Publishing.

Harvard Citation

Dietrich, E. et al. (2021) Great Philosophical Objections to Artificial Intelligence. 1st edn. Bloomsbury Publishing. Available at: (Accessed: 15 October 2022).

MLA 7 Citation

Dietrich, Eric et al. Great Philosophical Objections to Artificial Intelligence. 1st ed. Bloomsbury Publishing, 2021. Web. 15 Oct. 2022.