The Digital and the Real World

Computational Foundations of Mathematics, Science, Technology, and Philosophy

Klaus Mainzer


About This Book


In the 21st century, digitalization is a global challenge for mankind. Even for the public, it is obvious that our world is increasingly dominated by powerful algorithms and big data. But how computable is our world? Some people believe that successful problem solving in science, technology, and the economy depends only on fast algorithms and data mining. Opportunities and risks are often not understood, because the foundations of algorithms and information systems are not studied rigorously. In fact, they are deeply rooted in logic, mathematics, computer science, and philosophy.

Therefore, this book studies the foundations of mathematics, computer science, and philosophy in order to guarantee the security and reliability of knowledge through constructive proofs, proof mining, and program extraction. We start with the basics of computability theory, proof theory, and information theory. In a second step, we introduce new concepts of information and computing systems in order to overcome the gap between the digital world of logical programming and the analog world of real computing in mathematics and science. The book also considers consequences for digital and analog physics, computational neuroscience, financial mathematics, and the Internet of Things (IoT).



Contents:

  • Introduction
  • Basics of Computability
  • Hierarchies of Computability
  • Constructive Proof Theory
  • Computational Mathematics and Digital Information Systems
  • Intuitionistic Mathematics and Human Creativity
  • Proof Mining Bridging Logic, Mathematics, and Computer Science
  • Reverse Mathematics Bridging Logic, Mathematics, and Computer Science
  • From Intuitionistic to Homotopy Type Theory — Bridging Logic, Mathematics, and Computer Science
  • Real Computability and Real Analysis
  • Complexity Theory of Real Computing
  • Real Computing and Neural Networks
  • Complexity of Algorithmic Information
  • Complexity of Information Dynamics
  • Digital and Real Physics
  • Digital and Real Computing in the Social World
  • Philosophical Outlook

-->
Readership: Undergraduate and graduate students, scientists, and readers who are interested in foundational, interdisciplinary, and philosophical questions of mathematics, computer science, and science in general.
Keywords: Computability; Complexity; Constructive Mathematics; Proof Mining; Real Computing; Analog Networks; Information System; Digital Physics

Key Features:

  • A compact introduction to the foundations of modern mathematics and computer science
  • Bridges the gap between digital, real, and analog computing through new concepts of information systems
  • Consequences for the natural and social sciences with respect to scientific computing


Information

Publisher: WSPC
Year: 2017
ISBN: 9789813225503

Chapter 1

Introduction

Historically, the theory of algorithms and computability started at the beginning of the last century with foundational debates on logic, mathematics, and philosophy. Therefore, the digital world has its origins in logical, mathematical, and philosophical theories.
Alan Turing anticipated the development of the digital general-purpose computer with his famous Turing machine (Turing, 1936). Here, we have a finite-state control device with a read/write head and a two-way unlimited tape consisting of an unbounded number of cells. The control device is regulated by a program, which is a finite set of instructions specifying states of the machine, operations for printing 0, 1, and (blank) into the cells, moving left or right one cell, and stopping the machine.
A number-theoretic function is defined to be computable if it is the input–output function of a Turing machine. Furthermore, a set of natural numbers is decidable if there is an effective procedure (its characteristic function) that, given any element, will decide in a finite number of steps whether or not the element is in the set. In short, for a set to be decidable, its characteristic function must be Turing computable.
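To make this concrete, the following is a minimal sketch of a Turing machine simulator in Python (the state names, tape convention, and step bound are illustrative assumptions, not Turing's original formalism). The machine computes the characteristic function of a simple decidable set: the binary strings containing an even number of 1s.

    # Minimal Turing machine simulator (illustrative sketch).
    # delta maps (state, symbol) -> (new_state, symbol_to_write, move).

    def run_tm(delta, word, state="q0", blank="_", max_steps=10_000):
        tape = dict(enumerate(word))       # sparse two-way unlimited tape
        head = 0
        for _ in range(max_steps):
            if state in ("accept", "reject"):
                return state               # halting states
            symbol = tape.get(head, blank)
            state, write, move = delta[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        raise RuntimeError("no halt within step bound")

    # Decide the set of binary strings with an even number of 1s:
    # q0 = "even number of 1s so far", q1 = "odd number so far".
    delta = {
        ("q0", "0"): ("q0", "0", "R"), ("q0", "1"): ("q1", "1", "R"),
        ("q1", "0"): ("q1", "0", "R"), ("q1", "1"): ("q0", "1", "R"),
        ("q0", "_"): ("accept", "_", "R"),   # even count: in the set
        ("q1", "_"): ("reject", "_", "R"),   # odd count: not in the set
    }

    print(run_tm(delta, "1001"), run_tm(delta, "111"))  # accept reject

Here the characteristic function is total: the machine halts on every input. For an undecidable set, no such always-halting machine exists.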
All number-theoretic propositions can be coded with the digits 0 and 1. Thus, Turing's theory of computability in computer science is reduced to the digital paradigm. The Turing machine is mathematically equivalent to many other effective procedures (e.g., register machines, recursive number-theoretic functions). Therefore, Church's thesis asserts that Turing computability captures (digital) computability in general.
But there seems to be a deep gap between digital computer science and mathematics. Many mathematical disciplines, like the calculus, heavily depend on the continuous nature of real numbers. In (classical) physics, the dynamics of nature is modeled by continuous curves and differential equations. Their solutions correspond to physical events which can be predicted or explained as causal effects under certain constraints. At least in classical physics, nature is assumed to be continuous, and even quantum physics formulates its fundamental laws with real- and complex-valued quantities in continuous differential equations (e.g., the Schrödinger equation).
Our perceptions of the world seem to be continuous: our bodily sensors receive analog signals of, e.g., electromagnetic and acoustic waves. Behind all that, there is an old and deep conflict in philosophy between the continuous and discrete characteristics of nature, dating back to antiquity. Is matter continuous (Aristotle) or atomic (Democritus)? Nowadays, in technology, we distinguish between digital (discrete) and analog (continuous) signal processing.
Before the digital paradigm of computer science started with the Turing machine, there was already an old tradition of real algorithms in mathematics. Since antiquity, undecidability problems of geometrical constructions (e.g., the trisection of an angle) were discussed (Mainzer, 1980). The final answer was found at the beginning of the 19th century with Galois's result on the non-solvability by radicals of polynomial equations of degree 5 or more. These algorithms manipulate real numbers. There is a long-standing tradition of decidability results in algebra and analysis leading to modern numerical analysis. We remind the reader of Newton's method for finding (approximate) zeros of polynomials in one variable. Newton's method was one of the first search algorithms in numerical analysis and scientific computing.
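As a concrete illustration, here is a minimal sketch of Newton's iteration in Python (the polynomial, starting point, and tolerance are arbitrary illustrative choices):

    def newton(f, df, x, tol=1e-12, max_iter=100):
        # Approximate a zero of f by the iteration x <- x - f(x)/f'(x).
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:     # close enough to a zero
                return x
            x -= fx / df(x)       # Newton step
        raise RuntimeError("no convergence within iteration bound")

    # Example: the square root of 2 as a zero of f(x) = x^2 - 2.
    print(newton(lambda x: x*x - 2, lambda x: 2*x, 1.0))  # ~1.4142135623730951

Note that the iteration manipulates floating-point approximations rather than the real numbers themselves; this is precisely the gap between real algorithms and their digital realizations discussed below.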
Nowadays, practical algorithms to solve the equations of physics, climate models, weather prediction, or financial mathematics mainly rely on numerical analysis. Therefore, we not only need logical–mathematical foundations of digital computing (Turing, 1936) but also foundational studies of real algorithms. These have a long tradition in mathematics, from Newton, Gauss, Euler, etc., to modern numerical analysis and scientific computing.
What do real computability and decidability mean? It is amazing that, from a logical point of view, the digital world of natural numbers seems to be much more complicated than the continuous world of real analysis, although counting digits is easily done by children and the natural numbers are a subset of the real numbers: according to Gödel's famous proof, arithmetic is incomplete, whereas the first-order theory of real closed fields is decidable.
How far can we transfer the results of computability and decidability from digital to real mathematics? These results have consequences for the relation between the discrete (digital) and the analog (continuous and real), and thereby for the modern analog world of sensor technology and human experience through analog sensors.
Computability of algorithms in computer science is also deeply linked with the provability of theorems in mathematics. Since antiquity, the truth of mathematical theorems has been verified by logical proofs in axiomatic systems. Logical-axiomatic proofs became the paradigm in science and philosophy (Hilbert, 1918). In the 20th century, the question arose whether the true propositions of a theory can be described completely, correctly, and consistently in formal systems. Famous logicians, mathematicians, and philosophers of the 20th century (e.g., Hilbert, Gödel, and Turing) demonstrated the possibilities and limitations of formalization.
It is well known that mathematical proofs sometimes only guarantee the existence of a solution without providing an effective algorithm and constructive problem-solving procedure. The reason is that mathematics as it is normally practised is based on the law of excluded middle: Either a statement is true or its negation is true. According to this law, it is sufficient to show indirectly that the assumption of non-existence of a solution is false (Mainzer, 1970). But, for practical applications, it is, of course, a great disadvantage that indirect proofs do not tell us how to find a constructive solution step by step (Bishop, 1967).
From a digital point of view, constructive proofs should be realized by digital systems. If mathematics is restricted to computable functions of natural numbers, Turing machines will do the job. But in higher mathematics and physics (e.g., functional analysis), we have to consider functionals and spaces of higher types. Therefore, a general concept of a digital information system is necessary to compute finite approximations of functionals of higher types. Contrary to the machine orientation of computational mathematics, intuitionistic mathematics (INT) is rooted in the philosophy of human creativity. Mathematics is understood as a human activity of constructing and proving step by step. In the rigorous sense of Brouwer's intuitionism, we even get a concept of the real continuum and infinity which differs from the ordinary understanding of classical mathematics. In the foundational research programs of proof mining and reverse mathematics, we offer degrees of constructivity and provability to classify problem solving and proving for different mathematical applications.
Proof mining aims to obtain the missing information in an incomplete theorem and to extract effective procedures through a purely logical analysis of mathematical proofs and principles (Kohlenbach, 2008). Sometimes, it is even sufficient to find bounds for the search processes of problem solving. At the very least, proof mining tells us how far away a proof is from being constructive. The research program of proof mining goes back to Georg Kreisel's proof-theoretic research. He illustrated the extraction of effective procedures from proofs as "unwinding proofs" (Feferman, 1996).
From a theoretical point of view, proof mining is an important link between logic, mathematics, and computer science. Logic is no longer merely a formal activity alongside and separate from mathematics. Actually, metatheorems of proof mining deliver logical tools to solve mathematical problems more effectively and to obtain new mathematical information: proofs are more than verification of theorems!
From a practical point of view, proof theory also has consequences in applied computer science. In this case, instead of formal theories and proofs, we consider formal models and computer programs of processes, e.g., in industry. Formal derivations of formulas correspond to, e.g., steps of industrial production. In order to avoid the additional costs of mistakes and biases, we should test and prove that computer programs are correct and sound before applying them. Automated theorem proving is applied to integrated design and verification. Software and hardware designs must be verified to prevent costly and life-threatening errors. Dangerous examples in the past were flaws in the microchips of space rockets and medical equipment. Network security of the Internet is a challenge for banks, governments, and commercial organizations. In the age of Big Data and the increasing complexity of our living conditions, rigorous proofs and tests are urgently demanded to avoid a dangerous future with overwhelming computational power running out of control.
But, in real mathematics, not all proofs can be reduced to effective procedures. Actually, there are degrees of constructivity, computability, and provability. How strong must a theory be in order to prove a certain theorem? We therefore try to determine which axioms are required to prove a given theorem. How constructive and computational are these assumptions?
Instead of going forward from axioms to theorems as in usual proofs, we proceed in the reverse direction (backward), from a given theorem to the assumed axioms. Therefore, this research program is called reverse mathematics (Friedman, 1975). If an axiom system S proves a theorem T, and theorem T together with a weaker base system S′ (the reversal) proves the axioms of S, then S is called equivalent to theorem T over S′. In order to determine degrees of constructivity and computability, one tries to characterize mathematical theorems by equivalent subsystems of arithmetic. The formulas of these arithmetic subsystems can be distinguished by different degrees of complexity.
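Schematically, in LaTeX notation not used in the original text (with \vdash for provability), the reversal pattern reads:

    \[
      S \vdash T
      \quad\text{and}\quad
      S' + T \vdash S
      \;\Longrightarrow\;
      T \text{ is equivalent to } S \text{ over } S'.
    \]

A classical example from the field, cited here only for illustration: over the weak base system RCA_0 (recursive comprehension), weak König's lemma is equivalent to the Heine–Borel covering theorem for the unit interval.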
In (second-order) arithmetic, all objects are represented as natural numbers or sets of natural numbers. Therefore, proofs about real numbers must use Cauchy sequences of rational numbers, which can be represented as sets of natural numbers. In reverse mathematics, theorems and principles of real mathematics are characterized by equivalent subsystems of arithmetic with different degrees of constructivity and computability (Ishihara, 2005). Some of them are computable in Turing's sense, but others are definitely not and need stronger tools beyond Turing computability. Thus, it is proved that only parts of real mathematics can be reduced to the digital paradigm. Obviously, reverse mathematics and proof mining are important research programs to clarify and deepen the connections between mathematics, computer science, and logic in a rigorous way.
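As a small illustration of the Cauchy-sequence representation, the following Python sketch treats a real number as a "Cauchy name": a function that, for every n, returns a rational within 2^-n of the real (the representation and the precision convention are illustrative assumptions):

    from fractions import Fraction

    def real_sqrt2(n):
        # Rational approximation of sqrt(2) within 2**-n, by bisection.
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > Fraction(1, 2**n):
            mid = (lo + hi) / 2
            if mid * mid < 2:
                lo = mid
            else:
                hi = mid
        return lo

    def real_add(x, y):
        # Sum of two Cauchy names: ask each argument for one extra bit.
        return lambda n: x(n + 1) + y(n + 1)

    s = real_add(real_sqrt2, real_sqrt2)   # 2*sqrt(2) as a Cauchy name
    print(float(s(30)))                    # ~2.8284271247...

Equality of two such reals is not decidable in general: approximations can be compared to any finite precision, but no finite computation certifies exact equality.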
Another important bridge between logic, mathematics, and computer science is type theory. Types of data (resp., terms) are distinguished in the computer programs of computer science as well as in the formal systems of mathematics, in order to avoid software bugs (resp., logical paradoxes). Martin-Löf's intuitionistic type theory offers both a philosophical foundation of constructive mathematics and a proof assistant in computer science (e.g., Coq). Recently, homotopy type theory (HoTT) has extended formalization from set-theoretical objects to categories related to ever larger cardinals. HoTT attempts to develop a universal ("univalent") foundation of mathematics as well as computer languages with respect to proof assistants for advanced mathematical proofs. The question arises how far univalent foundations of mathematics can be constructive.
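In the propositions-as-types reading that underlies Martin-Löf type theory, a proof is literally a program. A minimal sketch in Lean 4 (the text mentions Coq; Lean is a comparable proof assistant, used here only for illustration):

    -- Propositions as types: a proof of A ∧ B → B ∧ A is a program
    -- that takes a pair of proofs and swaps its components.
    theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
      fun h => ⟨h.right, h.left⟩

The type checker that rules out software bugs and the proof checker that rules out invalid inferences are here one and the same mechanism.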
Philosophically, constructive foundations of mathematics are deeply rooted in the epistemic tradition of efficient reasoning with the principle of parsimony in explanations, proofs, and theories. It was the medieval logician and philosopher William of Ockham (1285–1347) who first demanded that one should prefer explanations and proofs with the smallest number of assumed abstract concepts and principles. The reason is that, according to Ockham, universals (e.g., mathematical sets) are only abstractions from individuals (the elements of a set) without real existence. Ockham's principle of parsimony later became popular as "Ockham's razor", reducing abstract principles as far as possible.
The question arises how far reduction is possible without loss of essential information. In order to analyze real computation in mathematics, we need an extension of digital computability beyond Turing computability. Real computing machines can be considered idealized analog computers accepting real numbers as given infinite entities (Blum et al., 1989). They are not only theoretically interesting but also practically inspiring with respect to their applications in scientific computing and technology (e.g., sensor technology).
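A minimal sketch of such an idealized machine in Python (Python floats stand in for exact reals here, and the example set is an illustrative choice): a BSS-style computation in the sense of Blum et al. holds real or complex numbers in its registers, performs exact arithmetic on them, and branches on exact comparisons, e.g., semi-deciding that a point lies outside the Mandelbrot set:

    def escapes(c, max_iter=1000):
        # BSS-style loop: exact arithmetic on (idealized) complex registers,
        # branching on an exact comparison. Accepts once |z| > 2; may run
        # forever for points inside the set (truncated here by max_iter).
        z = 0j
        for _ in range(max_iter):
            if abs(z) > 2:
                return True
            z = z * z + c
        return False          # unresolved within the step bound

    print(escapes(1.5 + 0j), escapes(-1.0 + 0j))  # True False

In the BSS model, each arithmetic step and comparison counts as one operation on exact reals; the floating-point numbers above are only a digital stand-in for that idealization.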
These aspects are also exciting for computational neuroscience and cognitive science. Appropriate neural networks or cellular automata are computationally equivalent to Turing machines. Furthermore, we can distinguish a hierarchy of automata and machines which can realize formal languages of increasing complexity. Examples are finite automata accepting regular languages or Turing machines accepting Chomsky grammars of natural languages. Actually, (recurrent) neural networks with rational numbers as synaptic weights are computationally equivalent to Turing machines accepting recursive languages like Chomsky grammars. It can be proven that (recurrent) neural networks with non-computable real weights can even operate on non-recursive languages (Siegelmann, 1994). Human brains also operate on non-recursive natural languages. If analog neural networks are considered as models of human brains, they seem to be more powerful than digital Turing machines.
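At the bottom of this hierarchy, a finite automaton needs only a constant amount of memory, namely its current state. A minimal sketch in Python (the language and the state names are arbitrary illustrative choices) of a deterministic finite automaton accepting the regular language of binary strings ending in "01":

    # DFA for the regular language of binary strings ending in "01".
    DELTA = {
        ("start", "0"): "seen0",  ("start", "1"): "start",
        ("seen0", "0"): "seen0",  ("seen0", "1"): "seen01",
        ("seen01", "0"): "seen0", ("seen01", "1"): "start",
    }
    ACCEPTING = {"seen01"}

    def accepts(word):
        state = "start"
        for ch in word:
            state = DELTA[(state, ch)]
        return state in ACCEPTING

    print(accepts("1101"), accepts("110"))  # True False

Such an automaton cannot even check balanced brackets; climbing the hierarchy toward Turing machines, and beyond them toward networks with real weights, adds exactly this kind of expressive power.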
The foundational debate on the digital and analog, discrete and continuous also has deep consequences for physics. In classical physics, the real numbers are assumed to correspond to continuous states of physical reality. For example, electromagnetic or gravitational fields are mathematically modeled by continuous manifolds. Thus, “real computing” (i.e., computing with real numbers) seems to model effective procedures in a continuous reality. Fluid dynamics illustrates this paradigm with continuous differential equations.
But, in modern quantum physics, a “coarse grained” reality seems to be more appropriate. Quantum systems are characterized by discrete quantum states which can be defined as quantum bits. Instead of classical information systems, following the digital concept of a Turing machine, quantum information systems with quantum algorithms and quantum bits open new avenues to digital physics. Information and information systems are no longer fundamental categories only of information theory, computer science, and logic but also of physics (Mainzer, 2016b).
But is it possible to reduce the real world to a quantum computer as an extended concept of a universal quantum Turing machine? "It from bit," proclaimed physicist John A. Wheeler (1990). On the other hand, fundamental symmetries of physics (e.g., Lorentz symmetry and electroweak symmetry) are continuous (Audretsch and Mainzer, 1996; Mainzer, 2005). Einstein's space–time is also continuous. Are they only abstractions (in the sense of Ockham) and approximations to a discrete reality?
Some authors proclaim a discrete cosmic beginning with an initial quantum system (e.g., a quantum vacuum) evolving into an expanding macroscopic universe of increasing complexity, which can be approximately modeled by continuous mathematics. In this case, physical reality is discrete, and continuous mathematics with real numbers is only an (ingenious) human invention. But physical laws including quantum theory (e.g., Hilbert space) are infused with real numbers and the mathematics of the continuum. Thus, from an extreme Platonic point of view, we could also assume a layer much deeper than the discrete quantum world: the universe of mathematical structures themselves as primary reality, with infinity and continuity.
Anyway, besides all ontological speculations, mathematics is fundamental for science and cannot be reduced to the digits and the discrete nature of computer science. We should distinguish degrees of constructivity, computability, and provability in the rigorous sense of real computing, proof mining, and reverse and univalent mathematics.
In this case, Stephen Wolfram's vague concept of computational equivalence can be made precise and corrected (Wolfram, 2002). He claimed that the natural processes of atomic, molecular, and cellular systems should be considered as "computationally equivalent" to procedures of Turing machines, cellular automata, and neural networks (in the sense of a physically extended Church's thesis). This idea already came up with John von Neumann's and Konrad Zuse's cellular automata (Zuse, 1969). At least, the mathematical models and theories of these natural systems can be distinguished by different degrees of complexity. Therefore, we need the corresponding mathematical theories and models with their equations and laws in order to derive reliable predictions, classifications, and explanations. Quasi-empirical computer experiments in the sense of Wolfram, even with powerful computers and algorithms, are not sufficient.
The exponentially increasing power of supercomputers and global networks is overwhelming. Success and efficiency in science and the economy seem to depend only on fast algorithms and huge databases. In the economy, they rapidly predict future trends and profiles of products and customers. In science, they recognize correlations and patterns in huge collections of data (e.g., elementary particle colliders in high-energy physics, machine learning algorithms in molecular biology). Science (like the economy) seems to be more and more driven by fast algorithms and a huge amount of data, by Big Data. "The end of theory?" asked Chris Anderson, an influential publisher of digital media.
Wolfram ev...
