
The Self-Assembling Brain

How Neural Networks Grow Smarter

Peter Robin Hiesinger

  1. 296 pages
  2. English

About This Book

What neurobiology and artificial intelligence tell us about how the brain builds itself.

How does a neural network become a brain? While neurobiologists investigate how nature accomplishes this feat, computer scientists interested in artificial intelligence strive to achieve this through technology. The Self-Assembling Brain tells the stories of both fields, exploring the historical and modern approaches taken by the scientists pursuing answers to the quandary: What information is necessary to make an intelligent neural network?

As Peter Robin Hiesinger argues, "the information problem" underlies both fields, motivating the questions driving forward the frontiers of research. How does genetic information unfold during the years-long process of human brain development—and is there a quicker path to creating human-level artificial intelligence? Is the biological brain just messy hardware, which scientists can improve upon by running learning algorithms on computers? Can AI bypass the evolutionary programming of "grown" networks?

Through a series of fictional discussions between researchers across disciplines, complemented by in-depth seminars, Hiesinger explores these tightly linked questions, highlighting the challenges facing scientists, their different disciplinary perspectives and approaches, as well as the common ground shared by those interested in the development of biological brains and AI systems. In the end, Hiesinger contends that the information content of biological and artificial neural networks must unfold in an algorithmic process requiring time and energy. There is no genome and no blueprint that depicts the final product. The self-assembling brain knows no shortcuts.

Written for readers interested in advances in neuroscience and artificial intelligence, The Self-Assembling Brain looks at how neural networks grow smarter.


Information

1 Algorithmic Growth

1.1 Information? What Information?

The Second Discussion: On Complexity

AKI (THE ROBOTICS ENGINEER): Okay, this was weird. Not sure it adds up. Let’s just start with one problem: the neat vs scruffy discussion was really about the culture of people working in AI in the ’70s and ’80s. It’s not a scientific concept. These words are not properly defined, and their meaning changed with time. I know them to describe Minsky vs McCarthy, the scruffy ‘hack’ versus formal logic—but both worked on symbol processing AI, not neural nets. Only later did people pitch symbol processing vs neural nets using the same words—how does that make sense? It was also too much biology for me. Half of it was the history of this guy Sperry.
ALFRED (THE NEUROSCIENTIST): Well, before ‘this guy Sperry’ people apparently thought development only produced randomly wired networks, and all the information enters through learning. That’s huge. Sperry marks a transition, a step artificial neural networks apparently never made! I liked the idea of the shared problem. It’s interesting that the early computer people thought it just had to be random because of the information problem of real brains. And even Golgi thought there must be a whole network right from the start, the famous ‘reticular theory.’ How does a biologist or an AI programmer today think the information gets into the neural network of the butterfly?
MINDA (THE DEVELOPMENTAL GENETICIST): You need precise genetic information to build it and make it work. I think it is conceptually known how the information got into the network: it’s in the butterfly’s genes. There may be some loss of precision during development, but a neural circuit that works has to be sufficiently precise to do so.
PRAMESH (THE AI RESEARCHER): That sounds like genetic determinism to me. Of course there is precise information in the genes, but that’s not the same as the information that describes the actual network. We need to look at information at different levels. It’s similar in our AI work: we first define a precise network architecture, learning rules, and so on; there we have total control. But the network has the ability to learn. We feed it a lot of information, and we may never really know how it stores and computes that information. Here you have less control, and you really don’t want more. An unsupervised approach will even find things you never told it to find in the first place.1
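Pramesh's point about unsupervised approaches can be made concrete with a toy sketch. The code below is plain k-means clustering, not anything specific from the book or from Pramesh's lab; the one-dimensional data values and the choice of k = 2 are assumptions for illustration. The algorithm receives no labels, yet it locates the two clumps on its own.

```python
# A minimal, assumed illustration of unsupervised structure-finding:
# Lloyd's k-means on unlabeled 1-D data. Nothing in the input tells
# the algorithm where the groups are, or even that there are groups.

def kmeans_1d(points, k=2, iters=20):
    """Assign each point to its nearest center, then re-average the centers."""
    centers = sorted(points)[:k]  # crude initialization: the k smallest values
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two unlabeled clumps around 1.0 and 10.0 (made-up data).
data = [0.9, 1.1, 1.0, 1.2, 9.8, 10.1, 10.0, 10.2]
print(kmeans_1d(data))  # centers settle near 1.05 and 10.025
```

The "things you never told it to find" are the cluster centers themselves: they appear in the output without ever appearing in the input specification.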
ALFRED: I like that. I also liked that Minsky and Rosenblatt both built their networks with random connections, not precise network architecture … and those are the things that worked …
PRAMESH: Yes, randomness and variability can make outcomes robust, but also unpredictable. In our computational evolution experiments, random processes are required to make evolution flexible and robust, but we can never predict the solution the simulation will find. It’s the same with how an artificial neural network learns. It’s a question of levels: you can have control at a lower level without having control over how the system operates at a higher level. This can actually be true even if a system is completely deterministic, without randomness or variability. Maybe that’s how we should think about genes.
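Pramesh's levels argument, full control at the low level with no control over how the system plays out, has a classic illustration that the dialogue does not spell out: an elementary cellular automaton. The sketch below assumes Wolfram's Rule 30; every entry of its update table is specified by hand, the system is completely deterministic, and yet the pattern it grows is famously irregular.

```python
# Assumed illustration of "control at a lower level without control at a
# higher level": elementary cellular automaton Rule 30. The eight-entry
# rule table below IS the entire low-level specification.

RULE30 = {
    (1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply the rule to every cell simultaneously (edges wrap around)."""
    n = len(cells)
    return [RULE30[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell and iterate the same deterministic rule.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

Running it prints a triangle whose interior never settles into a predictable texture, even though the rule table leaves nothing to chance.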
MINDA: As I said before, the genes contain the information to develop the network. In your AI work, you feed additional information for the network to learn. That’s a key difference. A developing biological neural network forms precise connectivity based on nothing but the genetic program. If there is no environmental contribution, how can there be anything in the network that was not previously in the genes?
PRAMESH: Can you predict how a mutation in a gene will play out during development?
MINDA: Of course. We know hundreds, probably thousands, of developmental disease mutations where we know the outcome.
AKI: That’s cheating! You only know the outcome because you’ve seen it. Could you predict the outcome if you had never seen it before?
MINDA: Let me see. It is true that the outcomes of de novo mutations are not as easy to predict, but we can usually gain insight based on previous knowledge. If we know the gene’s function or other mutations, that of course helps to make predictions. We can then test those predictions.
ALFRED: … but then you are again predicting based on comparison with previous outcomes! How much could you really predict if you had nothing but the genome and zero knowledge about outcomes from before? It’s a cool question.
MINDA: No previous data is a bit hypothetical. It would of course be more difficult to predict the effect of a given mutation. The more information I gather, the better I can predict the outcome.
PRAMESH: So, once you have seen an outcome for a given mutation, you can predict it—sometimes with 100% probability. But the point is: a mutation may play out unpredictably during development even if it always leads to the same outcome—with 100% probability.
AKI: Not clear.
PRAMESH: Well, if you run an algorithm or a machine with the same input twice and it produces the exact same output twice, then this is what we call deterministic. But even a deterministic system can function unpredictably, that’s the story of deterministic chaos. It just means there is no other way to find out what a given code produces at some later point in time other than running the full simulation; there is no shortcut to prediction.2
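Pramesh's "no shortcut to prediction" claim can be sketched with the logistic map, a textbook example of deterministic chaos. The choice of map, the parameter r = 4.0, and the starting values are my assumptions for illustration, not details from the dialogue.

```python
# Deterministic chaos in one line of arithmetic: the logistic map
# x -> r * x * (1 - x). The update is fully deterministic (same input,
# same output, every time), yet two trajectories starting a hair apart
# soon disagree completely; the only way to learn x after n steps is
# to compute all n steps.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10  # two almost identical initial conditions
for n in range(60):
    if n % 10 == 0:
        print(f"step {n:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.2e}")
    a, b = logistic(a), logistic(b)
```

The printout shows the separation growing from 10^-10 toward order one: deterministic, repeatable, and still unpredictable in practice from the starting data alone.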
ALFRED: I remember the deterministic chaos stuff from the ’80s and ’90s … did that ever go anywhere? It always sounded like giving a catchy, fancy name to something that nobody really understood.
AKI: Yeah, as Wonko the Sane said: if we scientists find something we can’t understand, we like to call it something nobody else can understand …3
MINDA: I don’t see where this is supposed to go. It seems to me that if a code always produces precisely the same outcome, then it is also predictable.
PRAMESH: Well, it’s not. Deterministic systems can be what is called mathematically ‘undecidable.’ Do you know cellular automata? People still study them a lot in artificial life research. Conway’s Game of Life is quite famous. Think of a very simple rule set to create a pattern. It turns out that after a few hundred iterations, new patterns emerge that could not have been predicted based on the simpl...
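The excerpt breaks off here, but Conway's Game of Life, which Pramesh names, is easy to sketch. The rules below are the standard ones (birth with exactly three live neighbors, survival with two or three); the glider used as a demonstration is a well-known emergent pattern that the rules never mention.

```python
# Conway's Game of Life on a sparse set of live-cell coordinates.
# The two rules below are the complete specification; the traveling
# "glider" is behavior that emerges from them, not anything coded in.

from itertools import product

def life_step(live):
    """One generation: a cell lives next step iff it has exactly 3 live
    neighbors, or has 2 live neighbors and is alive now."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# After four generations the glider reappears, shifted by one cell
# diagonally: a moving object nowhere described in the rule set.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

This is exactly the flavor of result Pramesh is pointing at: the low-level rules are trivial to state and fully deterministic, yet what patterns they will produce can, in general, only be discovered by running them.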
