Consciousness and Robot Sentience
About this book

THIS BOOK is the fully revised and updated second edition of 'Consciousness and Robot Sentience'. With a substantial amount of new material, it provides new insights into artificial intelligence (AI) and machine consciousness beyond the material published in the first edition. The organization of the book has been streamlined for better clarity and continuity of its lines of argument.

The perspective of AI has been added to this edition. It is shown that contemporary AI has a hidden problem that prevents it from becoming a truly intelligent agent. A self-evident solution to this problem is given in this book.

This solution is surprisingly connected with the concepts of qualia, the mind-body problem and consciousness. These are the hard problems of consciousness that have so far been without a viable solution. Unfortunately, the solution to the hidden problem of AI cannot be satisfactorily implemented unless the phenomena of qualia and consciousness are first understood. In this book an explanation of consciousness is presented, one that rejects material and immaterial substances, dualism, panpsychism, emergence and metaphysics. What remains is obvious. This explanation excludes consciousness in digital computers, but allows the artificial creation of consciousness in one natural-like way, by associative non-computational neural networks.

The proof of a theory calls for empirical verification. In this case, the proof could take the form of a sentient robot. This book describes a step towards this goal in the form of the author's small experimental robot XCR-1. This robot has evolved through the years and now has new cognitive abilities, which are described.

Contents:

  • Dedication
  • Preface
  • Artificial Intelligence
  • Sensory Information and Meaning
  • Self-Explanatory Information and Qualia
  • Hypotheses about Consciousness
  • The Explanation of Consciousness
  • The Gateway to Mind; Sensory Perception
  • Memory, Learning, Thinking and Imagination
  • Natural Language and Inner Speech
  • Emotions and Motivation
  • Artificial Neural Networks
  • Thinking and Associative Neural Networks
  • Towards Artificial Cognitive Perception
  • Examples of Perception/Response Feedback Loops
  • Symbols in Perception/Response Feedback Loops
  • Information Integration with Multiple Modules
  • Emotional Significance in Associative Processing
  • The Haikonen Cognitive Architecture (HCA)
  • Mind Reading with HCA
  • A Comparison of Some Cognitive Architectures
  • Testing Artificial Consciousness
  • An Experimental Robot with the HCA
  • Appendix
  • Some Experimental Neural Circuits
  • Bibliography
  • Index


Readership: This book demystifies both the enigmatic philosophical issues of consciousness and the practical engineering issues of conscious robots by presenting them in an easy-to-understand manner for the benefit of students, researchers, philosophers and engineers in the field.

Key Features:

  • The proof of a theory calls for empirical verification. In this case, the proof could be in the form of a sentient robot
  • This book describes a step towards this in the form of the author's small experimental robot XCR-1
  • This robot has evolved through the years and now has new cognitive abilities, which are described


Chapter 1

Artificial Intelligence

1.1. AI, Computation and Cognition

Is it possible to make a computer think? Are computers thinking already? These are old questions. Already in the early days of computers, in the 1950s, some researchers thought that they had the answers ready. When humans compute, they think. What does the computer do when it executes exactly the same computations? Humans can also reason non-numerically. What does a computer do when it is programmed to carry out the same reasoning in exactly the same way? Would it not be fair to say, then, that the computer thinks? The early researchers thought so, and this hypothesis gave rise to the discipline of Artificial Intelligence (AI).
The fundamental assumption behind AI is that both thinking and cognition are computational and symbolic, and can therefore be produced via the execution of algorithms. This view leads to the conclusion that human-like general intelligence can be produced by suitable computer programs, and that eventually the computer should be able to think and reason as well as, or even better and faster than, a human. It would all depend on the extent and ingenuity of the programs.
However, all is not well, and the foundations of AI are not as solid as they are made to appear. AI has a fundamental, embarrassing problem that everybody knows about, but nobody wants to discuss. Yet this problem prevents AI from becoming what it is supposed to be. It also turns out that studying this problem reveals an unavoidable connection between intelligence and consciousness; there cannot be any true intelligence without consciousness, as will be pointed out later on.
The fundamental problem of AI was not initially recognized by the AI pioneers, and when it eventually was, it was denied, belittled and played down. Still, more than sixty years after the first AI programs, AI is haunted by this problem, and the situation is not getting better. In fact, it is getting worse and outright dangerous, as ever more complicated AI programs with autonomous executive powers are being fielded.
How did the fundamental problem of AI arise? Artificial Intelligence was born in the mid-1950s, when Herbert Simon and Allen Newell produced their first AI program, the "Logic Theorist". This program was different from all previous computer programs, as it did not do numerical computations. Instead, it executed logical reasoning. Now it appeared that computers could be more than mere programmable numeric calculators. Herbert Simon claimed that he and Newell had invented a thinking machine, and in doing so had also solved the mind-body problem that had puzzled philosophers of the mind for eons. Later on, Simon and Newell presented their "Physical Symbol System Hypothesis" (PSSH), which became the cornerstone of Artificial Intelligence. According to this hypothesis, a rule-based symbol-manipulating computer has everything that is necessary for general intelligence. Thus, thinking and intelligence are nothing more than rule-based symbol manipulation, and a suitably programmed computer would eventually be able to execute every mental operation that is executed by the human mind and brain [Newell and Simon 1976].
Simon and Newell were not able to verify their Physical Symbol System Hypothesis experimentally, apparently for a practical reason, namely the limited processing power and memory capacity of the computers of that era. Instead of a direct proof, they proposed that the hypothesis was actually verified by indirect evidence: the fact that there were no other known means and mechanisms for thinking and cognition. If thinking were not rule-based manipulation of symbols, then what else could it be? Nothing else. This is the only way, and "there is no other game in town" [Fodor 1975]. This conclusion was accepted at face value by subsequent AI researchers, even though it was based on a logical fallacy, argumentum ad ignorantiam: the appeal to the absence of evidence to the contrary. Unfortunately, ignorance of any contradicting evidence is not proof of the non-existence of such evidence; it is proof of something else. Simon, Newell and Fodor could not think of any other explanation for the processes of thinking, but this ignorance does not constitute any logical proof of their hypothesis.
The digital computer is a physical symbol system in which binary words, strings of zeros and ones, are used as the symbols. Computers are known to work very well, and they are able to perform a wide variety of information processing tasks, including ones that apparently call for some kind of intelligence. For instance, computers can successfully play games and control self-driving cars. No doubt even more astonishing applications will be seen. So what, if anything, is the problem with physical symbol systems?
There is a serious problem, and it will be shown here that the Physical Symbol System Hypothesis is not valid. The brain is not a digital computer, and it is not a physical symbol system, even though it is able to think and reason in symbolic ways. There is a fundamental difference between the ways in which information is processed in the brain and in the computer, and this difference prevents the creation of computer-based true intelligence. In the following, this difference is explained, along with how the problem can be remedied and how true thinking machines can be designed.

1.2. The Difference between the Brain and the Computer

Complicated calculations involve the serial execution of different mathematical operations and the storage and reuse of intermediate results. The first computers were designed for the automatic execution of strings of numeric calculations. They were calculating machines with memories for intermediate results and for the type and order of the operations to be executed: the program. In addition to the calculating unit and the memory, a special control unit was needed to control the overall operation. Contemporary computers are vastly refined, but the basic principle of the combination of program, calculator, control and memory is still the same. Without programs, computers do nothing.
A computer memory consists of addressable memory locations, each holding a piece of data in the form of a binary word. The running of a computer program involves data retrieval and storage with the help of memory addresses. This can be demonstrated by a trivial example of a bank account balance computation command:
balance = balance + deposit
This command states that the numeric values at the memory location addresses "balance" and "deposit" must be added together, and the sum must be stored at the memory location address "balance". Now, a computer novice may claim that balance and deposit are not memory addresses; they are names of variables. This is how it looks, and it might also look as if the computer actually understood what the computation is about. However, balance and deposit are only labels for the actual memory location addresses, which are binary numbers. The numeric values stored at these memory locations are the "variables" that may change. The computer reserves a memory location with an address for each variable whenever the program is run.
The labels balance and deposit do not carry any external meaning to the computer; they are merely helpful for the programmer and for anyone trying to figure out what the program does. The stored numeric values of the variables do not have any external meaning to the computer, either. The running of a computer program involves the handling of memory location addresses, not external meanings.
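
To make this concrete, here is a minimal C sketch (an illustration of my own, not from the book): the names balance and deposit exist only for the human reader; the compiled program works with memory addresses and the bit patterns stored at them, and it can even print those addresses.

    #include <stdio.h>

    int main(void) {
        int balance = 100;
        int deposit = 50;

        balance = balance + deposit;   /* the book's example command */

        /* The names are only labels for memory locations; the machine
           works with the addresses and the bit patterns stored there. */
        printf("balance: address %p, value %d\n", (void *)&balance, balance);
        printf("deposit: address %p, value %d\n", (void *)&deposit, deposit);
        return 0;
    }

After compilation the labels are gone entirely; only addresses and stored bit patterns remain.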
The brain has no addressable memory locations, and consequently no memory address handling and management is required. Instead, the brain operates with phenomenal meanings, produced by the senses and retrieved from memory. Memories are evoked by "mental images", and in this sense the information itself is also the "memory address". Information processing and memory function are seamlessly combined in the brain. The flow of mental action is not controlled by any program; instead, it is driven by internal and external conditions and situations.
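
By way of contrast, the following toy C sketch (my illustration, not the author's architecture) shows content-addressable recall: a stored bit pattern is retrieved by its similarity to a cue, so the information itself serves as the "address".

    #include <stdio.h>

    #define N   3
    #define LEN 8

    /* Toy content-addressable memory: patterns are recalled by
       similarity to a cue, not fetched from a numeric address. */
    static const char *stored[N] = { "10110010", "01101101", "11100001" };

    static int hamming(const char *a, const char *b) {
        int d = 0;
        for (int i = 0; i < LEN; i++)
            if (a[i] != b[i]) d++;
        return d;
    }

    /* Return the stored pattern closest to the (possibly noisy) cue. */
    static const char *recall(const char *cue) {
        int best = 0, bestd = LEN + 1;
        for (int i = 0; i < N; i++) {
            int d = hamming(cue, stored[i]);
            if (d < bestd) { bestd = d; best = i; }
        }
        return stored[best];
    }

    int main(void) {
        /* A corrupted cue still evokes the full stored pattern. */
        printf("%s\n", recall("10110011"));   /* -> 10110010 */
        return 0;
    }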
It should be obvious that the operational principles of the computer and the brain are completely different. The human mind operates with meanings, but where is the meaning in the computations of the computer? This question relates to the so-called symbol grounding problem. Meanings cannot be used in computations if this problem is not solved.

1.3. Meaning in Symbol Systems

Let's suppose that you are being held captive, sitting inside a windowless room. You have no idea how you got there, and you do not know what is outside. There is a monitor and a calculator in front of you. In order to get food, you have to use the calculator to perform given computations on numbers that appear on the monitor screen and type the results into the system. Eventually you learn to do this quickly, even though you have no idea what the numbers are about. Then, suddenly, the door is kicked open, police officers rush in, and you are taken to court and charged with homicide. It turns out that your computations have actually been controlling a self-driving car. There has been an accident, and a passenger has died. You try to explain that you are not guilty, because you could not understand what you were doing; you had no way of knowing what the numbers and calculations meant. The prosecutor is not impressed, maintaining that all this is irrelevant. Your operations had been successful for a good while, and from the outside it appeared that you understood what you were doing. What else could be required?
Without meanings there can be no understanding. Without understanding there can be no true intelligence. Rule-based computations will not reveal what the numbers mean or are about. Syntactic manipulation of symbols will not lead to semantics, and the meanings of the symbols will not be revealed in this way. The American philosopher John Searle tried to point this out with his famous "Chinese Room" thought experiment, in which a non-Chinese person inside a room answers written Chinese-language questions in written Chinese with the help of rules and look-up tables [Searle 1980]. From the outside it appears that the room, or somebody inside the room, is able to understand Chinese-language symbols, but it is known from the set-up that this is not the case.
Searle explained that computers are kinds of Chinese Rooms, operating blindly with rules and symbols without meanings, and are therefore inherently unable to understand anything. This argument was not well received by Strong AI enthusiasts, who maintained, in the good tradition of the Physical Symbol System Hypothesis, that a suitably programmed computer with proper inputs and outputs would have a mind in the same sense as humans have. Searle did not accept this, and argued that understanding will not arise in the computer no matter what kinds of rules are programmed, because the external meanings of the symbols are neither accessed nor utilized. In the Chinese Room, information is processed by blind rules, not by meanings, and the same goes for computers, too.
Searle's argument is related to the so-called symbol grounding problem: how the meanings of symbols can be defined and incorporated in symbol systems [Harnad 1990]. A symbol in itself is only a distinct pattern with no inherent meaning. Words and traffic signs are everyday examples of symbols. If you have not learned what they mean, they are meaningless to you.
The symbol grounding problem is also apparent in the case of dictionaries. A good dictionary defines the meaning of every word with the help of other words. Thus, it would appear that the symbol grounding problem is solved there. This is not the case. When one looks up the meanings of the words that explain the word to be explained, one eventually ends up in a circle, where the explaining words are explained by the word to be explained. For example, Webster's New World Dictionary from the fifties defines "red" as the color of blood. Fine, but what is "color" then? Webster knows: colors seen by the eye are orange, yellow, red... and you are no wiser.
Mathematics is another symbol system affected by the symbol grounding problem. Let's consider a simple example.
Let A = 5B. What is the meaning of B? This can be solved by the rules of algebra, and we get B = 0.2A. But did we get the real meaning of B? No. The meanings of A and B remain unsolved, and cannot be revealed, simply because they are not there.
It should be evident that mathematical rule-based operations will only reveal something about the relationships between the symbols used in the computations, but nothing about their actual intended meanings. There is no mathematical operation that could reveal any external meanings, and these meanings, if any, remain only in the mind of the person doing the calculations.
In physics, physical units such as the meter, the second and the kilogram are carried along with the equations. At first sight it might appear that in this way meanings are attached to the calculations. However, this is not the case, and the symbol grounding problem is not solved. The unit markings are just letters and may be used in the algebraic computations in the same way as the other symbols in the equations, in the manner of dimensional analysis. No meaning is carried into, or captured by, the process of computation or the computing system itself, and the understanding of the external meanings remains with the human supervising the calculations. The very universality and power of mathematics arise from the fact that meanings are omitted. It does not matter what is counted: beans, bananas or money. But from this universality it also follows that the numbers and calculations alone will not reveal what is being counted.
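
The point can be illustrated with a small C sketch (mine, not the book's): units are represented as exponent triples and combined purely by adding exponents, in the manner of dimensional analysis. The computation is flawless, yet nothing in it knows what a metre is.

    #include <stdio.h>

    /* A physical unit as exponents of (metre, kilogram, second):
       velocity is m^1 kg^0 s^-1, force is m^1 kg^1 s^-2, and so on. */
    typedef struct { int m, kg, s; } Unit;

    /* Multiplying two quantities adds the exponents of their units. */
    static Unit unit_mul(Unit a, Unit b) {
        return (Unit){ a.m + b.m, a.kg + b.kg, a.s + b.s };
    }

    int main(void) {
        Unit velocity = {1, 0, -1};   /* m/s */
        Unit mass     = {0, 1,  0};   /* kg  */
        Unit momentum = unit_mul(mass, velocity);

        /* Pure symbol shuffling: exponents in, exponents out. */
        printf("momentum = m^%d kg^%d s^%d\n",
               momentum.m, momentum.kg, momentum.s);
        return 0;
    }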
The lesson here is that in a symbol system the meanings of symbols cannot ultimately be defined by other symbols in the system, nor can they be revealed by any computation. At least some of the meanings must be tied to and imported from the outside world. Therefore, a system that operates with meanings must be able to acquire external information. Humans have senses for that purpose, and nowadays computers, too, can be fitted with cameras, microphones and any other sensor that an application requires. Thus, it should be technically possible to solve the symbol grounding problem.
However, there is an unfortunate complication. Meanings cannot be imported in the form of symbols, as the imported symbols would only increase the number of symbols to be interpreted. Therefore, the meanings must be imported in a form that requires no interpretation: in self-explanatory forms of sensory information. Symbols are not such forms.
This requirement leads to another catch: a conventional symbol system is able to handle only symbols, as there is no provision for any other form of expression. Non-symbols cannot be accommodated. A digital computer is able to accept only binary words as its input. Consequently, any analog input, such as audio or visual information, must first go through analog-to-digital conversion. This conversion outputs binary numbers, which are symbols. As such, they require interpretation, and no symbol grounding has been achieved. This is a serious problem, and it leads to the fundamental problem of AI.
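
The conversion step is easy to see in code. This C sketch of ordinary 8-bit quantization (an illustration, not anything specific to the book) turns a sensor voltage into a bare binary word; whatever the voltage meant in the world, what enters the computer is just another symbol.

    #include <stdio.h>

    /* 8-bit A/D conversion: a voltage in [0, vref) becomes an
       integer code 0..255 -- a naked symbol, stripped of meaning. */
    static int adc8(double volts, double vref) {
        int code = (int)(volts / vref * 256.0);
        if (code < 0)   code = 0;
        if (code > 255) code = 255;
        return code;
    }

    int main(void) {
        double v = 1.23;             /* say, a microphone output */
        int code = adc8(v, 5.0);

        printf("%.2f V -> %d -> ", v, code);
        for (int b = 7; b >= 0; b--)          /* print the binary word */
            putchar((code >> b) & 1 ? '1' : '0');
        putchar('\n');                        /* 1.23 V -> 62 -> 00111110 */
        return 0;
    }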

1.4. The Fundamental Problem of AI

The fundamental problem of Artificial Intelligence arises from the fact that computers do not operate with external meanings; they operate only with blind rules and naked data. Without meanings there cannot be any understanding, and without understanding there cannot be any true intelligence. Therefore, contemporary AI is not true intelligence.
The computer is a symbol system, operating with programmed rules and binary word symbols. The possible external meanings of the symbols are not available without interpretation, and in practice this interpretation is done by the human using the computer.
The external meanings must be imported into a symbol system, and this calls for external information acquisition and suitable sensors. However, there is a problem. The imported meaning cannot be in the form of a symbol, because that would only increase the number of symbols to be interpreted. Unfortunately, a digital computer cannot accept information in any form other than symbols, and therefore the symbol grounding problem cannot be solved, as symbols cannot ultimately be interpreted by other symbols alone. This means that the digital computer cannot operate with meanings in the true sense, and consequently it will not be able to understand what it does. Robots with symbolic AI do not understand what they are doing. They may appear to converse fluently with humans, but in reality they do not know what they are talking about. They do not even know that they exist.
True Artificial Intelligence calls for a different kind of information processing machinery. This machinery would be able to perceive itself and its environment, and for the grounding of the meanings of symbols it would use sensory information in self-explanatory forms that require no interpretation.
These conclusions lead to further questions: meanings must be imported in self-explanatory forms of information, but what exactly would these hitherto unheard-of forms be? Has anyone ever seen this kind of information, for that matter? The answer is obvious, and can be found by inspecting the process of sensory information acquisition. It will then also turn out that the issue of self-explanatory information is related to the problem of consciousness.

Chapter 2

Sensory Information and Meaning

2.1. Sensing and Meaning

The human mind acquires all its experience of the environment and the body with the help of various sensory channels, such as vi...
