The Self-Assembling Brain

How Neural Networks Grow Smarter

Peter Robin Hiesinger

296 pages · English · ePUB (mobile friendly)
About This Book

What neurobiology and artificial intelligence tell us about how the brain builds itself.

How does a neural network become a brain? While neurobiologists investigate how nature accomplishes this feat, computer scientists interested in artificial intelligence strive to achieve it through technology. The Self-Assembling Brain tells the stories of both fields, exploring the historical and modern approaches taken by scientists pursuing answers to the quandary: What information is necessary to make an intelligent neural network?

As Peter Robin Hiesinger argues, "the information problem" underlies both fields, motivating the questions that drive the frontiers of research. How does genetic information unfold during the years-long process of human brain development, and is there a quicker path to creating human-level artificial intelligence? Is the biological brain just messy hardware that scientists can improve upon by running learning algorithms on computers? Can AI bypass the evolutionary programming of "grown" networks? Through a series of fictional discussions between researchers across disciplines, complemented by in-depth seminars, Hiesinger explores these tightly linked questions, highlighting the challenges facing scientists, their different disciplinary perspectives and approaches, and the common ground shared by those interested in the development of biological brains and AI systems. In the end, Hiesinger contends that the information content of biological and artificial neural networks must unfold in an algorithmic process requiring time and energy. There is no genome and no blueprint that depicts the final product. The self-assembling brain knows no shortcuts.

Written for readers interested in advances in neuroscience and artificial intelligence, The Self-Assembling Brain looks at how neural networks grow smarter.


1 Algorithmic Growth

1.1 Information? What Information?

The Second Discussion: On Complexity

AKI (THE ROBOTICS ENGINEER): Okay, this was weird. Not sure it adds up. Let’s just start with one problem: the neat vs scruffy discussion was really about the culture of people working in AI in the ’70s and ’80s. It’s not a scientific concept. These words are not properly defined, and their meaning changed with time. I know them to describe Minsky vs McCarthy, the scruffy ‘hack’ versus formal logic—but both worked on symbol processing AI, not neural nets. Only later did people pitch symbol processing vs neural nets using the same words—how does that make sense? It was also too much biology for me. Half of it was the history of this guy Sperry.
ALFRED (THE NEUROSCIENTIST): Well, before ‘this guy Sperry’ people apparently thought development only produced randomly wired networks, and all the information enters through learning. That’s huge. Sperry marks a transition, a step artificial neural networks apparently never made! I liked the idea of the shared problem. It’s interesting that the early computer people thought it just had to be random because of the information problem of real brains. And even Golgi thought there must be a whole network right from the start, the famous ‘reticular theory.’ How does a biologist or an AI programmer today think the information gets into the neural network of the butterfly?
MINDA (THE DEVELOPMENTAL GENETICIST): You need precise genetic information to build it and make it work. I think it is conceptually known how the information got into the network: it’s in the butterfly’s genes. There may be some loss of precision during development, but a neural circuit that works has to be sufficiently precise to do so.
PRAMESH (THE AI RESEARCHER): That sounds like genetic determinism to me. Of course there is precise information in the genes, but that’s not the same as the information that describes the actual network. We need to look at information at different levels. It’s similar in our AI work: we first define a precise network architecture, learning rules, and so on; there we have total control. But the network has the ability to learn. We feed it a lot of information, and we may never get to know how it really stored and computed that information. Here you have less control, and you really don’t want more. An unsupervised approach will even find things you never told it to find in the first place.1
ALFRED: I like that. I also liked that Minsky and Rosenblatt both built their networks with random connections, not precise network architecture … and those are the things that worked.…
PRAMESH: Yes, randomness and variability can make outcomes robust, but also unpredictable. In our computational evolution experiments, random processes are required to make evolution flexible and robust, but we can never predict the solution the simulation will find. It’s the same with how an artificial neural network learns. It’s a question of levels: you can have control at a lower level without having control over how the system operates at a higher level. This can actually be true even if a system is completely deterministic, without randomness or variability. Maybe that’s how we should think about genes.
MINDA: As I said before, the genes contain the information to develop the network. In your AI work, you feed additional information for the network to learn. That’s a key difference. A developing biological neural network forms precise connectivity based on nothing but the genetic program. If there is no environmental contribution, how can there be anything in the network that was not previously in the genes?
PRAMESH: Can you predict how a mutation in a gene will play out during development?
MINDA: Of course. We know hundreds, probably thousands, of developmental disease mutations where we know the outcome.
AKI: That’s cheating! You only know the outcome because you’ve seen it. Could you predict the outcome if you had never seen it before?
MINDA: Let me see. It is true that the outcomes of de novo mutations are not as easy to predict, but we can usually gain insight based on previous knowledge. If we know the gene’s function or other mutations, that of course helps to make predictions. We can then test those predictions.
ALFRED: … but then you are again predicting based on comparison with previous outcomes! How much could you really predict if you had nothing but the genome and zero knowledge about outcomes from before? It’s a cool question.
MINDA: No previous data is a bit hypothetical. It would of course be more difficult to predict the effect of a given mutation. The more information I gather, the better I can predict the outcome.
PRAMESH: So, once you have seen an outcome for a given mutation, you can predict it—sometimes with 100% probability. But the point is: a mutation may play out unpredictably during development even if it always leads to the same outcome—with 100% probability.
AKI: Not clear.
PRAMESH: Well, if you run an algorithm or a machine with the same input twice and it produces the exact same output twice, then this is what we call deterministic. But even a deterministic system can function unpredictably, that’s the story of deterministic chaos. It just means there is no other way to find out what a given code produces at some later point in time other than running the full simulation; there is no shortcut to prediction.2
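Pramesh’s point that a fully deterministic system can still offer no shortcut to prediction is the textbook definition of deterministic chaos. A minimal sketch using the logistic map (the map and parameter values are an illustration chosen here, not an example from the book): the same input always produces the same trajectory, yet an imperceptibly different input diverges completely, so the only way to learn the state at step n is to run all n steps.

```python
# Deterministic chaos with the logistic map x_{n+1} = r * x_n * (1 - x_n).
# The map is fully deterministic: the same seed always yields the same
# trajectory. Yet nearby seeds diverge exponentially at r = 4, so there
# is no shortcut to knowing step n other than iterating all n steps.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2)           # identical input, run a second time
c = logistic_trajectory(0.2 + 1e-10)   # input perturbed by one part in 10^10

assert a == b                          # deterministic: identical output
print(abs(a[50] - c[50]))              # yet the perturbed run has diverged
```

Running it shows the two identical seeds agreeing exactly at every step, while the 10^-10 perturbation grows until the trajectories are unrelated.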
ALFRED: I remember the deterministic chaos stuff from the ’80s and ’90s, … did that ever go anywhere? It always sounded like giving a catchy, fancy name to something that nobody really understood.
AKI: Yeah, as Wonko the Sane said: if we scientists find something we can’t understand we like to call it something nobody else can understand.…3
MINDA: I don’t see where this is supposed to go. It seems to me that if a code always produces precisely the same outcome, then it is also predictable.
PRAMESH: Well, it’s not. Deterministic systems can be what is called mathematically ‘undecidable.’ Do you know cellular automata? People still study them a lot in artificial life research. Conway’s Game of Life is quite famous. Think of a very simple rule set to create a pattern. It turns out that after a few hundred iterations, new patterns emerge that could not have been predicted based on the simpl...
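The cellular automata Pramesh invokes can be run in a few lines. This sketch uses a one-dimensional elementary automaton, Rule 110, as a compact stand-in for Conway’s two-dimensional Game of Life (the choice of rule and grid size are mine, for brevity); Rule 110 has been proved Turing-complete, so questions about its long-run behaviour are undecidable in general.

```python
# One-dimensional cellular automaton (Rule 110). Each cell's next state
# depends only on itself and its two neighbours -- a trivially simple,
# fully deterministic rule. Rule 110 is nevertheless Turing-complete, so
# in general there is no way to predict its long-run behaviour other
# than running the automaton itself.

RULE = 110  # the 8 bits of this number encode the update table

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        # Read the 3-cell neighbourhood (wrapping at the edges) as a
        # number 0-7, then look up the matching bit of RULE.
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> pattern) & 1)
    return out

# Start from a single live cell and watch structure emerge.
cells = [0] * 40
cells[20] = 1
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Printed row by row, the single seed grows into the characteristic irregular triangles of Rule 110: structure that was nowhere stated in the eight-entry rule table.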
