The first AI conference was held in 1956 at Dartmouth College, and one of the first AI programs,
Logic Theorist, was completed that same year. As with most gatherings marking a new development, the conference came years after work on artificial intelligence had begun – though before 1956, the field was not called ‘artificial intelligence’, nor was it even regarded as a separate field. However, just a few years later, and continuing for decades, the AI project of building a machine with human-level intelligence was met with a barrage of sophisticated attacks by philosophers.
We can date the first of these attacks to around 1959, when J. R.
Lucas, a British philosopher, presented his paper ‘Minds, Machines, and Gödel’ to the Oxford Philosophical Society.1
Taken together, these attacks pointed out what appeared to be limitations of computation, raised problems about machines being able to think about specific things like coffee cups or numbers, and flagged the general problem of how a machine could be conscious and aware. Whether a computer could be rational and moral was also questioned. And finally, several issues were raised regarding cognitive architecture. For example, perhaps only something with an architecture like a brain’s could actually be intelligent, and computer architectures are nothing like brain architectures. Of course, AI researchers and pro-AI philosophers responded, sometimes by trying to refute the philosophical objections directly, at other times by building and implementing computer programs.
We are concerned, in this book, more with the philosophical arguments than the implementations; the latter will be mentioned only when needed. We consider ‘
philosophy’, moreover, to be an activity, not just an academic discipline. Many computer scientists and other AI researchers took straightforwardly philosophical positions and made philosophical arguments, often from a position of great naiveté about philosophy. Especially early in the wars, philosophers often responded in kind, from positions of great naiveté about computer science.
Somewhere around the turn of the millennium, the attacks on AI by philosophers abated; the pro-AI side also calmed down. No qualitatively new issues were identified and no new arguments were launched. While the old arguments were sometimes repeated, perhaps with small variations, they had largely lost their urgency. Did AI researchers successfully answer all the philosophers’ objections and allay all their concerns? Far from it. Did AI researchers come to see that the philosophers were right? No. Did anti-AI
philosophers come to see that they were wrong? No. Did AI researchers give up their quest? Not at all: research in AI techniques such as machine learning and
data mining is robust and thriving, and its practitioners are currently very much in demand. Indeed it is mainly the success of AI in practice that has generated the ethical issues explored in Part II.
It is important to realize that not all philosophers were anti-AI. Many were very supportive and enthusiastic, like Sloman, above. These pro-AI philosophers, as well as AI researchers themselves, pushed back against the anti-AI philosophers. But, as mentioned above, neither side succeeded in pushing the other off the field – genuine peace has not emerged. Rather, a stalemate has set in, accompanied by a wait-and-see attitude. As will be discussed in Part II, this attitude was induced, at least in part, by the arrival on the scene of a major new player, cognitive neuroscience.
Here is a general overview of some of the different forces that together worked to quiet the AI wars.
1. In the beginning, there were many proclamations like Sloman’s above. Many on the pro-AI side were positively gushing about how wonderful AI was and was going to be. The final hurdles to understanding human intelligence – a goal sought, arguably, since at least Plato – were falling … the end was in sight, true understanding was at hand. And with it, all the good things that would come from having artificial intelligences helping us run the world. The end of war (the bloody kind), the end of disease, famine, and hardship. However, the most important thing accomplished by the first wave of AI, actually, was teaching us how unbelievably complicated the hardware of the brain is, and how unbelievably complicated the processes involved in thinking really are. Basically, AI researchers and their comrades underestimated by several orders of magnitude how hard it was going to be to build a machine with human-level intelligence. As the decades rolled by, this failure of AI to deliver our intelligent, silicon planet-mates struck many anti-AI philosophers as evidence that they, the anti-AI-ers, were right or at least in the right neighbourhood.
2. As the overwhelming and completely under-appreciated complexity of human thought was emerging and making everyone re-evaluate positions once thought obvious, AI was quietly progressing. The successes of Google and Facebook, as corporations, are due to these advances. But this kind of progress didn’t seem philosophically problematic – it wasn’t a threat to our metaphysical or our epistemic understandings of ourselves. AI researchers involved in this latter-day progress did not go around saying their machines were conscious; they didn’t even say their machines were intelligent. They merely said their machines were more useful. What’s philosophically objectionable about that? It was not until this kind of second-wave AI was socially ubiquitous that ethical issues about the uses of AI fully came to the fore.
3. We have so far referred to the sides in the AI wars as the pro-AI side and the anti-AI side. This is convenient, but unfortunately it gives the impression that each side was coherent and of one mind. This impression is wrong. There were rebellions and robust disagreements within each side. Philosophers who thought AI was a pipe dream for one reason attempted to refute those who thought it was a pipe dream for another reason. AI defenders defended different types of AI (e.g. classical, rule-based systems versus distributed, parallel systems), and many held quite different views of what would constitute an AI success. The topic of AI seemed to unleash a storm of arguments all going in myriad directions. All this tumult was exhausting.
4. The philosophical attacks not only exposed problems in AI and the related fields of cognitive and developmental psychology, and more recently cognitive neuroscience, but also exposed problems in philosophy itself. Key philosophical concepts having to do with creativity and rationality, with semantics and thinking about objects in the environment, with consciousness, and with the very notion of having a mind at all were deployed to attack AI. But then other philosophers argued that these very notions were themselves open to attack from different directions, the most important being that the major philosophical concepts were not well defined. Soon it emerged that it might not be possible to define these concepts well enough to use them. It’s hard to win a battle if you are unclear on how to use your weapons. It’s even harder if you aren’t sure what your weapons are.
There is much more to be said about these four forces. Fortunately, there is a book in which to say it all: this book. Understanding how they played out in the decades leading up to 2000 is the goal of Part I.
The AI wars were a short but heady time in human history. The details of why the wars went silent reveal a goldmine of information and knowledge about humans and their neuropsychology, AI and computers, philosophy and its strange nature, and the roles of science and technology in our modern culture. Here then is the history of the AI wars.
1. Introduction: Advances and Naysayers
Human technological advances have always attracted naysayers. Sometimes the opposition raises good points. Splitting the atom was such a case. In hindsight, one can rationally conclude that the proliferation of nuclear weapons now was too high a price for nuclear knowledge then. Sometimes the naysayers, well-meaning as they are, miss the profound problem for some immediate one, usually because they lack the knowledge future advances will bring. At the turn of the twentieth century, those opposed to the automobile decried the speed and danger of the machines. Speed and danger proved to be real problems, of course, but
the profound problem was automobiles dumping the greenhouse gas carbon dioxide into our atmosphere. Today, the typical passenger vehicle dumps into our air around 4.6 metric tons of carbon dioxide per year.2
Then there are the puzzling cases. In one of the crueller ironies of history, the great Greek physician Galen became his own naysayer. Galen (c. 130–200 CE) wrote several important books on medicine, the most influential of which was called On the Usefulness of the Parts of the Body. Galen’s influence was due partly to the fact that he was one of the first experimental physicians, and he constantly urged the physicians who came after him to learn from experience and to focus on knowledge that could cure patients. Unfortunately, Galen’s influence went beyond anything he could have imagined, beyond anything he would have wanted. For approximately fifteen hundred years, until the late seventeenth century, Galen’s books were regarded as sacred texts. Instead of fu...