The Self-Assembling Brain

How Neural Networks Grow Smarter

Peter Robin Hiesinger

  1. 296 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

About This Book

What neurobiology and artificial intelligence tell us about how the brain builds itself.

How does a neural network become a brain? While neurobiologists investigate how nature accomplishes this feat, computer scientists interested in artificial intelligence strive to achieve this through technology. The Self-Assembling Brain tells the stories of both fields, exploring the historical and modern approaches taken by the scientists pursuing answers to the quandary: What information is necessary to make an intelligent neural network?

As Peter Robin Hiesinger argues, "the information problem" underlies both fields, motivating the questions driving forward the frontiers of research. How does genetic information unfold during the years-long process of human brain development—and is there a quicker path to creating human-level artificial intelligence? Is the biological brain just messy hardware, which scientists can improve upon by running learning algorithms on computers? Can AI bypass the evolutionary programming of "grown" networks? Through a series of fictional discussions between researchers across disciplines, complemented by in-depth seminars, Hiesinger explores these tightly linked questions, highlighting the challenges facing scientists, their different disciplinary perspectives and approaches, as well as the common ground shared by those interested in the development of biological brains and AI systems.

In the end, Hiesinger contends that the information content of biological and artificial neural networks must unfold in an algorithmic process requiring time and energy. There is no genome and no blueprint that depicts the final product. The self-assembling brain knows no shortcuts.

Written for readers interested in advances in neuroscience and artificial intelligence, The Self-Assembling Brain looks at how neural networks grow smarter.

Information

Year
2021
ISBN
9780691215518

1 Algorithmic Growth

1.1 Information? What Information?

The Second Discussion: On Complexity

AKI (THE ROBOTICS ENGINEER): Okay, this was weird. Not sure it adds up. Let’s just start with one problem: the neat vs scruffy discussion was really about the culture of people working in AI in the ’70s and ’80s. It’s not a scientific concept. These words are not properly defined, and their meaning changed with time. I know them to describe Minsky vs McCarthy, the scruffy ‘hack’ versus formal logic—but both worked on symbol processing AI, not neural nets. Only later did people pitch symbol processing vs neural nets using the same words—how does that make sense? It was also too much biology for me. Half of it was the history of this guy Sperry.
ALFRED (THE NEUROSCIENTIST): Well, before ‘this guy Sperry’ people apparently thought development only produced randomly wired networks, and all the information enters through learning. That’s huge. Sperry marks a transition, a step artificial neural networks apparently never made! I liked the idea of the shared problem. It’s interesting that the early computer people thought it just had to be random because of the information problem of real brains. And even Golgi thought there must be a whole network right from the start, the famous ‘reticular theory.’ How does a biologist or an AI programmer today think the information gets into the neural network of the butterfly?
MINDA (THE DEVELOPMENTAL GENETICIST): You need precise genetic information to build it and make it work. I think it is conceptually known how the information got into the network: it’s in the butterfly’s genes. There may be some loss of precision during development, but a neural circuit that works has to be sufficiently precise to do so.
PRAMESH (THE AI RESEARCHER): That sounds like genetic determinism to me. Of course there is precise information in the genes, but that’s not the same as the information that describes the actual network. We need to look at information at different levels. It’s similar in our AI work: we first define a precise network architecture, learning rules, etc.; there we have total control. But the network has the ability to learn—we feed it a lot of information, and we may never really know how it stores and computes that information. Here you have less control, and you really don’t want more. An unsupervised approach will even find things you never told it to find in the first place.1
ALFRED: I like that. I also liked that Minsky and Rosenblatt both built their networks with random connections, not precise network architecture 
 and those are the things that worked.

PRAMESH: Yes, randomness and variability can make outcomes robust, but also unpredictable. In our computational evolution experiments, random processes are required to make evolution flexible and robust, but we can never predict the solution the simulation will find. It’s the same with how an artificial neural network learns. It’s a question of levels: you can have control at a lower level without having control over how the system operates at a higher level. This can actually be true even if a system is completely deterministic, without randomness or variability. Maybe that’s how we should think about genes.
MINDA: As I said before, the genes contain the information to develop the network. In your AI work, you feed additional information for the network to learn. That’s a key difference. A developing biological neural network forms precise connectivity based on nothing but the genetic program. If there is no environmental contribution, how can there be anything in the network that was not previously in the genes?
PRAMESH: Can you predict how a mutation in a gene will play out during development?
MINDA: Of course. We know hundreds, probably thousands, of developmental disease mutations where we know the outcome.
AKI: That’s cheating! You only know the outcome because you’ve seen it. Could you predict the outcome if you had never seen it before?
MINDA: Let me see. It is true that the outcomes of de novo mutations are not as easy to predict, but we can usually gain insight based on previous knowledge. If we know the gene’s function or other mutations, that of course helps to make predictions. We can then test those predictions.
ALFRED: 
 but then you are again predicting based on comparison with previous outcomes! How much could you really predict if you had nothing but the genome and zero knowledge about outcomes from before? It’s a cool question.
MINDA: No previous data is a bit hypothetical. It would of course be more difficult to predict the effect of a given mutation. The more information I gather, the better I can predict the outcome.
PRAMESH: So, once you have seen an outcome for a given mutation, you can predict it—sometimes with 100% probability. But the point is: a mutation may play out unpredictably during development even if it always leads to the same outcome—with 100% probability.
AKI: Not clear.
PRAMESH: Well, if you run an algorithm or a machine with the same input twice and it produces the exact same output twice, then this is what we call deterministic. But even a deterministic system can function unpredictably, that’s the story of deterministic chaos. It just means there is no other way to find out what a given code produces at some later point in time other than running the full simulation; there is no shortcut to prediction.2
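
A minimal sketch in Python (an illustration, not from the book) of the point Pramesh makes, using the logistic map, a textbook example of deterministic chaos: the same input always produces the same trajectory, yet an input that differs by one part in a trillion eventually produces a completely different one, so in practice the only way to know the state at a later step is to run every step.

def logistic_trajectory(x0, steps, r=4.0):
    # Iterate the logistic map x -> r * x * (1 - x) and return the whole trajectory.
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Deterministic: the same input gives exactly the same output, every time.
assert logistic_trajectory(0.2, 60) == logistic_trajectory(0.2, 60)

# Yet there is no shortcut to prediction: a change of 1e-12 in the input is
# still invisible at step 10 but has grown enormously by step 60, so the
# state at step 60 can only be found by running all 60 steps.
a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-12, 60)
print(abs(a[10] - b[10]))  # tiny difference
print(abs(a[60] - b[60]))  # the trajectories no longer resemble each other
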
ALFRED: I remember the deterministic chaos stuff from the ’80s and ’90s 
 did that ever go anywhere? It always sounded like giving a catchy, fancy name to something that nobody really understood.
AKI: Yeah, as Wonko the Sane said: if we scientists find something we can’t understand we like to call it something nobody else can understand.3
MINDA: I don’t see where this is supposed to go. It seems to me that if a code always produces precisely the same outcome, then it is also predictable.
PRAMESH: Well, it’s not. Deterministic systems can be what is called mathematically ‘undecidable.’ Do you know cellular automata? People still study them a lot in artificial life research. Conway’s Game of Life is quite famous. Think of a very simple rule set to create a pattern. It turns out that after a few hundred iterations, new patterns emerge that could not have been predicted based on the simpl...
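
A minimal sketch in Python (an illustration, not from the book) of Conway’s Game of Life, the cellular automaton Pramesh mentions: two fixed local rules applied to every cell at every step, yet what the grid looks like many generations later can only be discovered by running the rules step by step.

from collections import Counter

def step(live):
    # One generation of Conway's Game of Life; `live` is a set of (x, y) live cells.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Survival with 2 or 3 live neighbours, birth with exactly 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells whose pattern reappears, shifted diagonally, every 4 steps.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same shape, moved one cell diagonally
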
