Chapter 1
Artificial Intelligence
1.1 AI, Computation and Cognition
Is it possible to make a computer think? Are computers thinking already? These are old questions. Already in the early days of computers, in the 1950s, some researchers thought that they had the answers ready. When humans compute, they think. What does the computer do when it executes exactly the same computations? Humans can also reason non-numerically. What does a computer do when it is programmed to carry out the same reasoning in exactly the same way? Wouldn't it be fair to say that the computer thinks? The early researchers thought so, and this hypothesis gave rise to the discipline of Artificial Intelligence (AI).
The fundamental assumption behind AI is that both thinking and cognition are computational and symbolic, and can therefore be produced via the execution of algorithms. This view leads to the conclusion that human-like general intelligence can be produced by suitable computer programs, and that eventually the computer should be able to think and reason as well as, or even better and faster than, a human. It would all depend on the extent and ingenuity of the programs.
However, all is not well, and the foundations of AI are not as solid as they are made to appear. AI has a fundamental, embarrassing problem that everybody knows about, but nobody wants to talk about. Yet this problem prevents AI from becoming what it is supposed to be. It also turns out that the study of this problem reveals an unavoidable connection between intelligence and consciousness; there cannot be any true intelligence without consciousness, as will be pointed out later on.
The fundamental problem of AI was not initially recognized by the AI pioneers, and when it eventually was, it was denied, belittled and played down. Still, more than sixty years after the first AI programs, AI is haunted by this problem, and the situation is not getting better. In fact, it is getting worse and outright dangerous, as ever more complicated AI programs with autonomous executive powers are being fielded.
How did the fundamental problem of AI arise? Artificial Intelligence was born in the mid-1950s, when Herbert Simon and Allen Newell produced their first AI program, the "Logic Theorist". This program was different from all previous computer programs, as it did not do numerical computations. Instead, it executed logical reasoning. It now appeared that computers could be more than mere programmable numeric calculators. Herbert Simon claimed that he and Newell had invented a thinking machine, and in doing so had also solved the mind-body problem that had puzzled philosophers of mind for eons. Later on, Simon and Newell presented their "Physical Symbol System Hypothesis" (PSSH), which became the cornerstone of Artificial Intelligence. According to this hypothesis, a rule-based symbol-manipulating computer has everything that is necessary for general intelligence. Thus, thinking and intelligence are nothing more than rule-based symbol manipulation, and a suitably programmed computer would eventually be able to execute every mental operation that is executed by the human mind and brain [Newell and Simon 1976].
Simon and Newell were not able to verify their Physical Symbol System Hypothesis experimentally, apparently for a practical reason, namely the limited processing power and memory capacity of the computers of that era. Instead of a direct proof, they proposed that the hypothesis was actually verified by indirect evidence: the fact that no other means and mechanisms for thinking and cognition were known. If thinking were not rule-based manipulation of symbols, then what else could it be? Nothing else. This is the only way, and "there is no other game in town" [Fodor 1975]. This conclusion was accepted at face value by subsequent AI researchers, even though it was based on a logical fallacy, argumentum ad ignorantiam: the appeal to the absence of evidence to the contrary. Unfortunately, ignorance of any contradicting evidence is not proof of the non-existence of such evidence; it is proof of something else. Simon, Newell and Fodor could not think of any other explanation for the processes of thinking, but this ignorance does not constitute any logical proof of their hypothesis.
The digital computer is a physical symbol system, where binary words, strings of zeros and ones, are used as the symbols. Computers are known to work very well, and they are able to perform a wide variety of information processing tasks, including those that apparently call for some kind of intelligence. For instance, computers can successfully play games and steer self-driving cars. No doubt even more astonishing applications will be seen. So what, if anything, is the problem with physical symbol systems?
There is a serious problem, and it will be shown here that the Physical Symbol System Hypothesis is not valid. The brain is not a digital computer, and it is not a physical symbol system, even though it is able to think and reason in symbolic ways. There is a fundamental difference between the ways in which information is processed in the brain and in the computer, and this difference prevents the creation of computer-based true intelligence. In the following, this difference is explained, and it is also explained how the problem can be remedied and how true thinking machines can be designed.
1.2 The Difference between the Brain and the Computer
Complicated calculations involve the serial execution of different mathematical operations and the storage and reuse of intermediate results. The first computers were designed for the automatic execution of strings of numeric calculations. They were calculating machines with memories for the intermediate results and for the type and order of the operations to be executed: the program. In addition to the calculating unit and memory, a special control unit was needed to control the overall operation. Contemporary computers are vastly refined, but the basic principle of the combination of program, calculator, control and memory is still the same. Without programs, computers do nothing.
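To make this principle concrete, the following sketch, written in Python for readability, emulates a minimal stored-program machine: memory holds both the program and the data, a control unit fetches and dispatches instructions, and an accumulator does the arithmetic. The instruction set is invented for the illustration and does not correspond to any real processor.

# A minimal sketch of the stored-program principle: memory holds both
# the program and the data; the loop below plays the role of the
# control unit; the accumulator plays the role of the calculating unit.
memory = {
    0: ("LOAD", 100),   # load the value at address 100 into the accumulator
    1: ("ADD", 101),    # add the value at address 101 to the accumulator
    2: ("STORE", 100),  # store the accumulator back to address 100
    3: ("HALT", None),
    100: 250,           # data, e.g. an account balance
    101: 50,            # data, e.g. a deposit
}

pc = 0           # program counter: the control unit's bookkeeping
accumulator = 0  # the calculating unit's single register

while True:
    opcode, address = memory[pc]   # fetch the next instruction
    pc += 1
    if opcode == "LOAD":
        accumulator = memory[address]
    elif opcode == "ADD":
        accumulator += memory[address]
    elif opcode == "STORE":
        memory[address] = accumulator
    elif opcode == "HALT":
        break

print(memory[100])   # prints 300; without the program, nothing happens

The machine does exactly what the program dictates and nothing else; remove the program from memory and the control loop has nothing to execute.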
A computer memory consists of addressable memory locations, one for each piece of data, which is stored in the form of a binary word. The running of a computer program involves data retrieval and storage with the help of memory addresses. This can be demonstrated by a trivial example of a bank account balance computation command:
balance = balance + deposit
This command states that the numeric values from the memory locations with the addresses "balance" and "deposit" must be added together, and that the sum must be stored at the memory location with the address "balance". Now, a computer novice might object that balance and deposit are not memory addresses; they are names of variables. That is how it looks, and it might even seem that the computer actually understands what the computation is about. However, balance and deposit are only labels for the actual memory location addresses, which are binary numbers. The numeric values stored at these memory locations are the "variables" that may change. The computer reserves a memory location with an address for each variable whenever the program is run.
The labels balance and deposit do not carry any external meaning to the computer, but are helpful for the programmer and anyone trying to figure out what the program does. The stored numeric values of variables do not have any external meaning to the computer, either. The running of a computer program involves the handling of memory location addresses, not external meanings.
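The point can be illustrated with a small Python sketch of what a translator does with the labels: the names balance and deposit exist only in a symbol table that maps them to numeric addresses, and the running program sees only addresses and values. The addresses used here are, of course, invented for the example.

# The labels exist only at translation time, as entries in a symbol
# table; the running program handles addresses and values, never names.
symbol_table = {"balance": 0x0040, "deposit": 0x0044}   # label -> address
memory = {0x0040: 250, 0x0044: 50}                      # address -> value

# "balance = balance + deposit" after translation: only addresses remain.
addr_balance = symbol_table["balance"]
addr_deposit = symbol_table["deposit"]
memory[addr_balance] = memory[addr_balance] + memory[addr_deposit]

print(memory[0x0040])   # 300, a number with no external meaning attached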
The brain has no addressable memory locations, and consequently no memory address handling and management is required. Instead, the brain operates with phenomenal meanings, produced by the senses and retrieved from memory. Memories are evoked by "mental images", and in this sense the information itself is also the "memory address". Information processing and memory function are seamlessly combined in the brain. The flow of mental action is not controlled by any program; instead, it is driven by internal and external conditions and situations.
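As a contrast to address-based retrieval, the following Python sketch shows content-addressable recall in its simplest possible form: a stored pattern is evoked by a partial cue, so the information itself acts as the "memory address". The patterns and the matching rule are purely illustrative and are not proposed here as a model of the brain.

# Content-addressable recall: no addresses, the cue itself selects
# the memory. Stored patterns and matching rule are illustrative only.
stored = [
    (1, 0, 1, 1, 0, 1),   # a remembered pattern, a "mental image"
    (0, 1, 1, 0, 1, 0),
    (1, 1, 0, 0, 1, 1),
]

def recall(cue):
    # Return the stored pattern that agrees with the cue in the most
    # positions; retrieval is driven by content, not by an address.
    return max(stored, key=lambda p: sum(a == b for a, b in zip(p, cue)))

print(recall((1, 0, 1, 0, 0, 1)))   # evokes (1, 0, 1, 1, 0, 1)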
It should be obvious that the operational principles of the computer and the brain are completely different. The human mind operates with meanings, but where is the meaning in the computations of the computer? This question relates to the so-called symbol grounding problem. Meanings cannot be used in computations if this problem is not solved.
1.3 Meaning in Symbol Systems
Let's suppose that you are in captivity, sitting inside a windowless room. You have no idea how you got there, and you do not know what is outside. There is a monitor and a calculator in front of you. In order to get food, you have to use the calculator to do given computations with numbers that appear on the monitor screen and type the results into the system. Eventually you learn to do this quickly, even though you have no idea what the numbers are about. Then, suddenly, the door is kicked open, police officers rush in, and you are taken to court and charged with homicide. It turns out that your computations have actually been controlling a self-driving car. There has been an accident, and a passenger has died. You try to explain that you are not guilty, because you could not have understood what you were doing, as you had no way of knowing what the numbers and calculations meant. The prosecutor is not impressed, maintaining that all this is irrelevant. Your operations were successful for a good while, and from the outside it appeared that you understood what you were doing. What else could be required?
Without meanings there can be no understanding. Without understanding there can be no true intelligence. Rule-based computations will not reveal what the numbers mean and are about. Syntactic manipulation of symbols will not lead to semantics, and the meanings of the symbols will not be revealed in this way. The American philosopher John Searle tried to point this out with his famous "Chinese Room" thought experiment, in which a non-Chinese person inside a room answers written Chinese-language questions in written Chinese with the help of rules and look-up tables [Searle 1980]. From the outside it appears that the room, or somebody inside it, is able to understand the Chinese symbols, but it is known from the set-up that this is not the case.
Searle explained that computers are kinds of Chinese rooms, operating blindly with rules and symbols without meanings, and are therefore inherently unable to understand anything. This argument did not go down well with Strong AI enthusiasts, who maintained, in the good tradition of the Physical Symbol System Hypothesis, that a suitably programmed computer with proper inputs and outputs will have a mind in the same sense as humans have. Searle did not accept this, and argued that understanding will not arise in the computer no matter what kinds of rules are programmed, because the external meanings of the symbols are neither accessed nor utilized. In the Chinese room information is processed by blind rules, not by meanings, and the same goes for computers, too.
Searle's argument is related to the so-called symbol grounding problem: how can the meanings of symbols be defined and incorporated into symbol systems [Harnad 1990]? A symbol in itself is only a distinct pattern with no inherent meaning. Words and traffic signs are everyday examples of symbols. If you have not learned what they mean, they are meaningless to you.
The symbol grounding problem is also apparent in dictionaries. A good dictionary defines the meaning of every word with the help of other words. Thus, it would appear that the symbol grounding problem is solved there. This is not the case. When one looks up the meanings of the words that explain the word to be explained, one eventually ends up in a circle, where the explaining words are explained by the very word to be explained. For example, Webster's New World Dictionary from the fifties defines "red" as the color of blood. OK, but what is "color" then? Webster knows: colors seen by the eye are orange, yellow, red... and you are no wiser.
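The circle is easy to demonstrate. The following Python sketch follows the definitions in a three-entry toy dictionary (the entries are invented for the example) and inevitably comes back to the word it started from:

# A toy dictionary: every word is defined by other words, so following
# the definitions can only go around in a loop.
dictionary = {
    "red": "the color of blood",
    "color": "a quality such as red, seen by the eye",
    "blood": "the red fluid in the body",
}

word, seen = "red", []
while word not in seen:
    seen.append(word)
    definition = dictionary[word]
    # follow the first defining word that is itself a dictionary entry
    word = next(w.strip(",.") for w in definition.split()
                if w.strip(",.") in dictionary)

print(" -> ".join(seen + [word]))   # red -> color -> red: a circle, not grounding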
Mathematics is another symbol system affected by the symbol grounding problem. Let's consider a simple example.
Let A = 5B. What is the meaning of B? This can be solved by the rules of algebra, and we get B = 0.2A. But did we get the real meaning of B? No. The meanings of A and B remain unsolved, and cannot be revealed, simply because they are not there.
It should be evident that mathematical rule-based operations will only reveal something about the relationships between the symbols used in the computations, but nothing about their actual intended meanings. There is no mathematical operation that could reveal any external meanings, and these meanings, if any, remain only in the mind of the person doing the calculations.
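The point can be checked mechanically. The following sketch uses the Python symbolic algebra package sympy, purely as an illustration, to solve the example above; the result is a relationship between the symbols and nothing more:

from sympy import symbols, Eq, solve

# Solve A = 5B for B by blind rule-based symbol manipulation.
A, B = symbols("A B")
print(solve(Eq(A, 5 * B), B))   # [A/5], i.e. B = 0.2A

# Whether A stands for meters, money or bananas is nowhere in the
# system; that "meaning" exists only in the mind of the user.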
In physics, physical units such as the meter, second and kilogram are carried along with the equations. At first sight it might appear that in this way meanings are attached to the calculations. However, this is not the case, and the symbol grounding problem is not solved. The unit markings are just letters, and may be used in the algebraic computations in the same way as the other symbols in the equations, in the manner of dimensional analysis. No meaning is carried into, or captured by, the process of computation or the computing system itself, and the understanding of the external meanings remains with the human supervising the calculations. The very universality and power of mathematics arises from the fact that meanings are omitted. It does not matter what is counted; beans, bananas or money. But from this universality it also follows that the numbers and calculations alone will not reveal what is being counted.
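This purely formal handling of units can be imitated in a few lines of Python. In the sketch below a quantity is a value together with a dictionary of unit exponents (a representation invented for the example); the program combines the exponents by blind rule, with no notion of what a meter or a second is:

# Units carried as formal symbols, in the manner of dimensional
# analysis: exponents are added when quantities are multiplied.
def multiply(q1, q2):
    value1, units1 = q1
    value2, units2 = q2
    units = dict(units1)
    for unit, exponent in units2.items():
        units[unit] = units.get(unit, 0) + exponent
    # drop units whose exponents cancelled out
    return (value1 * value2, {u: e for u, e in units.items() if e != 0})

speed = (5.0, {"m": 1, "s": -1})   # 5 m/s
time = (3.0, {"s": 1})             # 3 s
print(multiply(speed, time))       # (15.0, {'m': 1}), i.e. "15 m", by blind rule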
The lesson here is that in a symbol system the meanings of symbols cannot ultimately be defined by other symbols in the system, nor can they be revealed by any computation. At least some of the meanings must be tied to and imported from the outside world. Therefore, a system that operates with meanings must be able to acquire external information. Humans have senses for that purpose, and nowadays computers, too, can be fitted with cameras, microphones and any other sensors that an application requires. Thus, it should be technically possible to solve the symbol grounding problem.
However, there is an unfortunate complication. Meanings cannot be imported in the form of symbols, as the imported symbols would only increase the number of symbols to be interpreted. Therefore, the meanings must be imported in a form that requires no interpretation: in self-explanatory forms of sensory information. Symbols are not such forms.
This requirement leads to another catch: a conventional symbol system is able to handle symbols only, as there is no provision for any other form of expression. Non-symbols cannot be accommodated. A digital computer is able to accept only binary words as its input. Consequently, any analog input, such as audio or visual information, must first go through analog-to-digital conversion. This conversion outputs binary numbers, which are symbols. As such, they require interpretation, and no symbol grounding has been achieved. This is a serious problem, and it leads to the fundamental problem of AI.
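The following Python sketch shows why the conversion does not help. An analog voltage is quantized into an 8-bit binary word (the voltage range and resolution are invented for the example); the output is just a bit pattern, a symbol that still awaits interpretation:

# Analog-to-digital conversion as a plain rule-based mapping: clamp
# the voltage, scale it to the range 0..255 and round. Nothing of the
# sound or light that produced the voltage survives in the output.
def adc_8bit(voltage, v_min=0.0, v_max=5.0):
    voltage = min(max(voltage, v_min), v_max)
    code = round((voltage - v_min) / (v_max - v_min) * 255)
    return format(code, "08b")

print(adc_8bit(3.3))   # '10101000', a symbol that still needs interpreting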
1.4 The Fundamental Problem of AI
The fundamental problem of Artificial Intelligence arises from the fact that computers do not operate with external meanings; they operate only with blind rules and naked data. Without meanings there cannot be any understanding, and without understanding there cannot be any true intelligence. Therefore, contemporary AI is not true intelligence.
The computer is a symbol system, operating with programmed rules and binary word symbols. The possible external meanings of the symbols are not available without interpretation, and in practice this interpretation is done by the human using the computer.
The external meanings must be imported into the symbol system, and this calls for external information acquisition and suitable sensors. However, there is a problem. The imported meaning cannot be in the form of a symbol, because that would only increase the number of symbols to be interpreted. Unfortunately, a digital computer cannot accept information in any form other than symbols, and therefore the symbol grounding problem cannot be solved, as symbols cannot ultimately be interpreted by other symbols alone. This means that the digital computer cannot operate with meanings in the true sense, and consequently it will not be able to understand what it does. Robots with symbolic AI do not understand what they are doing. They may appear to converse fluently with humans, but in reality they do not know what they are talking about. They do not even know that they exist.
True Artificial Intelligence calls for a different kind of information processing machinery. This machinery would be able to perceive itself and its environment, and for the grounding of the meanings of symbols it would use sensory information in self-explanatory forms that do not require interpretation.
These conclusions lead to further questions: meanings must be imported in self-explanatory forms of information, but what exactly would these unheard-of forms be? Has anyone ever seen this kind of information, for that matter? The answer is obvious, and can be found by inspecting the process of sensory information acquisition. It will then also turn out that the issue of self-explanatory information is related to the problem of consciousness.
Chapter 2
Sensory Information and Meaning
2.1 Sensing and Meaning
The human mind acquires all its experience about the environment and the body with the help of various sensory channels, such as vi...