Chapter 1
OVERVIEW: FROM ARISTOTLE TO THE BITS OF AN INFORMATIONAL MIND
All Things Informational
How can anything, let alone the mind, be informational? At the time of writing (January 2012), it is estimated that more than 300 million people in the world use laptops or personal computers, of which about 50 million use the internet to access virtual worlds where virtual creatures can exist.1 While a scientist may regard this activity from the perspective of how well the technology works, philosophers may perceive it differently, by asking how the "existence" of a virtual object differs from, or is similar to, the existence of a real object in the real world. Some even take an enormous leap by saying that the way virtual objects "exist" in a computer can throw some light on how the mind exists in the brain.2 In contrast, in this book, we intend the computer to take a backseat in such discussions. We do argue, however, that virtual objects can exist as states of neural networks and that such objects can have just as vivid a character as any virtual creature in a virtual world created by an artist/programmer. However, neural networks and the cells of living brains existed long before the advent of the programmed creature, and some of these virtual objects are simply called the thoughts of the living organism. They are certainly not put there by a programmer but arise through the attrition of living or, put in a less tortuous way, through the process of building up experience by living.
We are not the first to suggest that mind is a virtual object which emerges from a neural network.3 But what are virtual objects? What are they made of? The inevitable, but not immediately comprehensible, answer is that such virtual objects are informational. This book is an attempt to unravel and explain this somewhat curious postulate. How can anything be made of information? How reassuring is it to know that our minds might be informational?
Aristotle appears in the title of this chapter because, with unequaled clarity and persuasiveness, he has shaped philosophy and science in a way that has remained influential right up to the present. As he would not have known what is meant by "informational", the reflection of his ideas into modern philosophy of mind might be a factor that makes some philosophers reluctant to consider the possibility of the mind being informational.
It was only 2,500 years later that the so-called information sciences began to encroach scientifically on the human domain of communication. That communication can be assessed in terms of bits and bytes, that computers can have memory and that they can even malfunction through electronic viruses are facts that are fairly well known to anyone who owns a laptop or a PC. And yet, looking for the "mind" of an informational machine may still appear to be breaking some basic scientific rules. This book attempts, in the humblest possible way, to suggest that, had Aristotle possessed a laptop, he himself might have bridged the gap.
Sadly, scientists and philosophers have somewhat parted company on the validity of an informational approach to the mind. Much of the heated debate stems from scientific claims that philosophy brings nothing verifiable into our knowledge. Similarly, the philosopher dislikes some elements of scientific certainty, seeing them as a form of arrogance. The basis of the informational style of argument is, however, scientific. The fact that it raises philosophical questions of existence and mind offers a tiny glimpse of hope that philosophy and science may unite in an attempt to make progress. There is also a problem of language. Expressions of the theory of information are mathematical and sometimes incomprehensible to those who do not naturally warm to the language of mathematics. This book aims to remove some of these barriers by tracing how some of the principles of information science have come about and how they might be expressed simply.
The cast of characters in this story certainly includes Aristotle, but also many others. Information is a young science and, in this first chapter, those who created it make their first appearance, before reappearing later in the book. One aim of the book is to show that the true meaning of "the age of information" is not, as some would have us believe, that we are a species driven by satellite navigation from computer workstation to digital television, while speaking to our virtual agents on mobile phones. The ambition is to be positive and to suggest that the age of information is a future age in which our own minds will be better understood through a common interest, among philosophers, scientists and computer experts, in all things informational.
What is Information?
Remembering that we live in the "information age", asking "what something is" drives the fingers in a rush to an internet search engine. Why not? In the case of "What is information?" a number of definitions can be found. Here are some examples4:
"A message received and understood"
"Knowledge acquired through study or experience or instruction"
"A collection of facts from which conclusions may be drawn"
"A numerical measure of the uncertainty of an outcome"
The many answers are not a sign of differences of opinion among those who attempt a definition, but are more an indication that the word has multiple meanings. Floridi5 calls "information" "a notoriously polymorphic phenomenon and a polysemantic concept", i.e., a word with many aspects and meanings. Floridi's philosophy analyses the multiplicity of these meanings by starting with a "well-formed" datum which can have several characteristics: truth/falsehood, environmental (the height of a tree), instructional (being told how to do something) or factual (a state of affairs). Here we advocate a more coherent analysis of this seeming diversity.
Taking the above list, consider "A message received and understood". The mention of a "message" implies that there must be an entity who or which (not to exclude machines) wishes to send a message. There must also be a recipient of the message who or which does the understanding! But what is it to understand something? How quickly the first attempt to find a simple definition of information bumps into a huge philosophical problem! How are things understood? Here is what the British philosopher John Locke (1632–1704) said in the first paragraph of his celebrated three-volume opus, "An Essay Concerning Human Understanding"6:
…An inquiry into the understanding, pleasant and useful. Since it is the understanding that sets man above the rest of sensible beings, and gives him all the advantage and dominion which he has over them; it is certainly a subject, even for its nobleness, worth our labour to inquire into…
It is not the intention here to discuss Locke's philosophy, but to begin to appreciate that definitions of information naturally lead to matters of the mind. Indeed, words like "knowledge", "experience" and "conclusions", in the other definitions, imply some mental effort. But the link is vague. An attempt to describe it in an unambiguous way needs to be taken in small steps.
It may seem odd, but a good place to start looking at information as a scientific idea is the most obscure of the above definitions: "a numerical measure of the uncertainty of an outcome". This definition refers to ideas which begin with the work, in 1948, of a young engineer on the staff of the Bell Research Laboratories in the USA. He was Claude Elwood Shannon (1916–2001).
Shannon and Crackly Telephone Lines and Minds
(Chapter 2. Shannon: The Reluctant Hero of the Information Age)
Claude Shannon is mostly associated with providing measurements and formulae that enable engineers to measure the efficiency of transmission media (telegraph lines, radio waves in space, etc.). But what has this to do with the mind? Shannon's way of measuring information is based on surprise: "The more one is surprised by a message or an experience, the more information is gained." We shall see that this fact alone allows us to suggest hypotheses as to how mind develops.
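To make this idea of surprise concrete, here is a minimal sketch in Python (the function name surprise_bits and the example probabilities are ours, chosen purely for illustration): an event that occurs with probability p carries -log2(p) bits of information, so rare events carry more information than expected ones.

    # A minimal sketch of "information as surprise": an event with
    # probability p carries -log2(p) bits. The rarer the event, the
    # greater the surprise and the more information is gained.
    import math

    def surprise_bits(p):
        """Bits of information gained on learning that an event of probability p occurred."""
        return -math.log2(p)

    print(surprise_bits(0.5))    # a fair coin landing heads: 1.0 bit
    print(surprise_bits(1/26))   # one letter picked at random from 26: about 4.7 bits
    print(surprise_bits(0.99))   # an almost certain event: about 0.014 bits, hardly any surprise

A fair coin, being maximally uncertain, yields exactly one bit per toss, which is why the bit is the natural unit of this measure.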
It was a quiet and subtle revolution: no headlines, no media presentations and no fuss. But, in 1948, Claude Shannon's formal definition of information made it possible, for the first time, for information to appear as a utility like water or gas. Amounts of information became measurable in "bits", as did the quality and capacity of transmission media such as telegraph wires, radio waves or just the air that transmits the pressure waves created by our voices. He made it possible to show why a crackly telephone line transmits less information than a good one.
According to Lord Kelvin, to be able to measure is to have a science7:
…when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, …you have scarcely in your thoughts advanced to the state of Science.
Measuring information in bits or bytes (chunks of 8 bits) is second nature now: words like "broadband", "hi-fi" and "megabytes per second" are familiar to anyone who owns a computer, a music system or a smart mobile phone. Even those who are not totally sure what such quantities mean are prepared to pay more for more "megabytes per second" in their internet connection, knowing that it will enable them to download their movies more swiftly.
In defining a measurable quantity, information, Shannon also spawned an underlying science called Information Theory. Engineers who design contemporary commodities such as the internet, cellular networks for mobile phones and global positioning systems (GPS) would be lost without this theory. But where does this leave the philosophical difficulties about "understanding"? It is clear that Shannon found the idea of "meaning" difficult, driving him deliberately to address the medium that carries the meaning rather than the actual message. Unless the medium has enough carrying capacity, the meaning will not be conveyed. So he wanted to measure amounts of information as a carrier for meaning, for the purpose of transmitting as much of it as possible. Such capacity can be restricted by, for example, low "fidelity" and interference from electrical "noise", which are the chief enemies of good communication. He wrote clearly about concentrating on the carrying medium and avoiding the (semantic) issues of the meaning of information in the second paragraph of his "A Mathematical Theory of Communication":
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.
(Shannon, 1948).8
This statement led to some disagreement among communication engineers, some of whom did want to capture meaning in their system of measurement; how this can be done is the subject of much of this book. But staying with quantity of information for now, Shannon's genius lies in his realization that, no matter what information is being transmitted, it can be measured in "bits" (binary units). A bit is a choice between two values. Morse code is a good example: dit and dah, also called "dot" and "dash", being the two values. So dit dit dit dah dah dah dit dit dit is decoded by Morse coders as SOS, Save Our Souls. Given enough time and a known code, any message can be transmitted. So when that which is being transmitted is music, it can be coded into groups of bits which, if transmitted fast enough, can be decoded to produce the bursts of energy that drive the earphones or the blaster loudspeakers that are now so familiar.
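As a rough sketch of this point (the Python framing and the tiny dictionary are ours; only the standard Morse patterns for S and O are included), a message such as SOS can be spelled out entirely in two-valued symbols and so counted as a number of binary choices:

    # A rough sketch: a message spelled out in Morse code, a two-valued
    # (dit/dah) alphabet, so its length can be counted in binary choices.
    MORSE = {"S": "...", "O": "---"}  # only the letters needed for this example

    def to_morse(message):
        """Translate each letter of the message into its dit/dah pattern."""
        return " ".join(MORSE[letter] for letter in message)

    encoded = to_morse("SOS")
    print(encoded)                         # ... --- ...
    print(len(encoded.replace(" ", "")))   # 9 two-valued symbols carry the distress call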
The enemies of efficient transmission of bits in a medium are the limited "bandwidth" of the connection and the amount of "noise" it contains. The idea of limited bandwidth is familiar to those who listen to recorded music: the higher frequencies of sounds and the very low frequencies tend not to come through as clearly as the middle range. This is true not only of sounds but of all media that transmit information and, for instance, applies also to radio waves that travel through free space and to other forms of transmission such as via cables. The bandwidth of a medium is the range of frequencies it can carry. Because particles rush about at random in these media, there is interference with the transmitted signals, called noise. So some of the bits or groups of bits of information traveling in these media can be corrupted by noise or hindered by the lack of bandwidth.
Shannon did one wonderful thing in the face of these deterrents to proper transmission. He developed a formula that allowed the designers of transmission systems to look around for codes that get the maximum possible transmission of information across a medium despite a given unavoidable amount of noise and with a given limited bandwidth.
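The formula in question is what became known as Shannon's channel-capacity result: a channel of bandwidth B hertz with a signal-to-noise power ratio S/N can carry at most C = B log2(1 + S/N) bits per second, however ingenious the code. A small illustrative calculation follows (the telephone-line figures of 3 kHz and a 30 dB signal-to-noise ratio are typical assumed values, not drawn from Shannon's paper):

    # An illustrative calculation with Shannon's channel-capacity formula:
    #   C = B * log2(1 + S/N)  bits per second,
    # where B is the bandwidth in hertz and S/N the signal-to-noise power ratio.
    import math

    def capacity_bits_per_second(bandwidth_hz, signal_to_noise):
        return bandwidth_hz * math.log2(1 + signal_to_noise)

    # An assumed, typical telephone line: 3 kHz of bandwidth and a
    # signal-to-noise ratio of 1000 (that is, 30 dB).
    print(capacity_bits_per_second(3000, 1000))   # roughly 30,000 bits per second

No code, however clever, can exceed this limit, but Shannon showed that codes exist which come as close to it as one wishes.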