Hands-On Neuroevolution with Python
eBook - ePub

Hands-On Neuroevolution with Python

Build high-performing artificial neural network architectures using neuroevolution-based algorithms

  1. 368 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

Increase the performance of various neural network architectures using NEAT, HyperNEAT, ES-HyperNEAT, Novelty Search, SAFE, and deep neuroevolution

Key Features

  • Implement neuroevolution algorithms to improve the performance of neural network architectures
  • Understand evolutionary algorithms and neuroevolution methods with real-world examples
  • Learn essential neuroevolution concepts and how they are used in domains including games, robotics, and simulations

Book Description

Neuroevolution is a form of artificial intelligence learning that uses evolutionary algorithms to simplify the process of solving complex tasks in domains such as games, robotics, and the simulation of natural processes. This book will give you comprehensive insights into essential neuroevolution concepts and equip you with the skills you need to apply neuroevolution-based algorithms to solve practical, real-world problems.

You'll start by learning the key neuroevolution concepts and methods, writing code in Python. You'll also get hands-on experience with popular Python libraries and cover examples of classical reinforcement learning, path planning for autonomous agents, and developing agents to autonomously play Atari games. Next, you'll learn to solve common and not-so-common challenges in natural computing using neuroevolution-based algorithms. Later, you'll understand how to apply neuroevolution strategies to existing neural network designs to improve training and inference performance. Finally, you'll gain clear insights into the topology of neural networks and how neuroevolution allows you to develop complex networks, starting with simple ones.

By the end of this book, you will not only have explored existing neuroevolution-based algorithms, but also have the skills you need to apply them in your research and work assignments.

What you will learn

  • Discover the most popular neuroevolution algorithms – NEAT, HyperNEAT, and ES-HyperNEAT
  • Explore how to implement neuroevolution-based algorithms in Python
  • Get up to speed with advanced visualization tools to examine evolved neural network graphs
  • Understand how to examine the results of experiments and analyze algorithm performance
  • Delve into neuroevolution techniques to improve the performance of existing methods
  • Apply deep neuroevolution to develop agents for playing Atari games

Who this book is for

This book is for machine learning practitioners, deep learning researchers, and AI enthusiasts who are looking to implement neuroevolution algorithms from scratch. Working knowledge of the Python programming language and basic knowledge of deep learning and neural networks are mandatory.


Section 1: Fundamentals of Evolutionary Computation Algorithms and Neuroevolution Methods

This section introduces core concepts of evolutionary computation and discusses particulars of neuroevolution-based algorithms and which Python libraries can be used to implement them. You will become familiar with the fundamentals of neuroevolution methods and will get practical recommendations on how to start your experiments. This section provides a basic introduction to the Anaconda package manager for Python as part of your environment setup.

This section comprises the following chapters:
  • Chapter 1, Overview of Neuroevolution Methods
  • Chapter 2, Python Libraries and Environment Setup

Overview of Neuroevolution Methods

The concept of artificial neural networks (ANNs) was inspired by the structure of the human brain. There was a strong belief that if we could imitate this intricate structure closely enough, we would be able to create artificial intelligence. We are still on the road to achieving this. Although we can implement Narrow AI agents, we are still far from creating a General AI agent.
This chapter introduces you to the concept of ANNs and the two methods we can use to train them (gradient descent with error backpropagation, and neuroevolution) so that they learn how to approximate an objective function. However, we will mainly focus on the neuroevolution-based family of algorithms. You will learn about the implementation of the evolutionary process inspired by natural evolution and become familiar with the most popular neuroevolution algorithms: NEAT, HyperNEAT, and ES-HyperNEAT. We will also discuss the optimization methods we can use to search for final solutions and compare objective-based search with the Novelty Search algorithm. By the end of this chapter, you will have a complete understanding of the internals of neuroevolution algorithms and be ready to apply this knowledge in practice.
In this chapter, we will cover the following topics:
  • Evolutionary algorithms and neuroevolution-based methods
  • NEAT algorithm overview
  • Hypercube-based NEAT
  • Evolvable-Substrate HyperNEAT
  • Novelty Search optimization method

Evolutionary algorithms and neuroevolution-based methods

The term artificial neural network refers to a graph of nodes connected by links, where each link has a particular weight. A neural node defines a kind of threshold operator that allows the signal to pass only after an activation function has been applied. This remotely resembles the way neurons in the brain are organized. Typically, the ANN training process consists of selecting appropriate weight values for all the links in the network. As established by the Universal Approximation Theorem, an ANN can approximate any continuous function and can therefore be considered a universal approximator.
For more information on the proof of the Universal Approximation Theorem, take a look at the following papers:
  • Cybenko, G. (1989). Approximation by Superpositions of a Sigmoidal Function. Mathematics of Control, Signals, and Systems, 2(4), 303–314.
  • Leshno, M., Lin, V. Ya., Pinkus, A., & Schocken, S. (1993). Multilayer Feedforward Networks with a Nonpolynomial Activation Function Can Approximate Any Function. Neural Networks, 6(6), 861–867. doi:10.1016/S0893-6080(05)80131-5 (https://www.sciencedirect.com/science/article/abs/pii/S0893608005801315?via%3Dihub)
  • Hornik, K. (1991). Approximation Capabilities of Multilayer Feedforward Networks. Neural Networks, 4(2), 251–257. doi:10.1016/0893-6080(91)90009-T (https://www.sciencedirect.com/science/article/abs/pii/089360809190009T?via%3Dihub)
  • Hanin, B. (2018). Approximating Continuous Functions by ReLU Nets of Minimal Width. arXiv preprint arXiv:1710.11278 (https://arxiv.org/abs/1710.11278)
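To make the graph-of-weighted-links picture concrete, here is a minimal sketch (not taken from the book) of a small feedforward network: each hidden node sums its weighted inputs and applies a sigmoid activation before passing the signal on. The network shape and weight values are arbitrary, illustrative assumptions.

```python
import math

def sigmoid(x):
    # Standard logistic activation: squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    # One hidden layer: each node computes the weighted sum of its inputs
    # and applies the activation function before passing the signal on.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

# Hypothetical weights for a 2-2-1 network
w_hidden = [[0.5, -0.3], [0.8, 0.2]]
w_output = [1.0, -1.5]
print(forward([1.0, 0.0], w_hidden, w_output))
```

Training, whether by backpropagation or by neuroevolution, amounts to choosing the values in `w_hidden` and `w_output` so that the output approximates the objective function.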
Over the past 70 years, many ANN training methods have been proposed. However, the most popular technique, which gained fame in this decade, was proposed by Geoffrey Hinton. It is based on the backpropagation of prediction error through the network, with various optimization techniques built around gradient descent of the loss function with respect to the connection weights between network nodes. It demonstrates outstanding performance when training deep neural networks for tasks related mainly to pattern recognition. However, despite its inherent power, it has significant drawbacks. One of these is that a vast number of training samples is required to learn something useful from a specific dataset. Another significant disadvantage is the fixed network architecture, created manually by the experimenter, which results in inefficient use of computational resources because a significant number of network nodes do not participate in the inference process. Also, backpropagation-based methods have problems transferring acquired knowledge to other, similar domains.
Alongside backpropagation methods, there are very promising evolutionary algorithms that can address the aforementioned problems. These bio-inspired techniques draw inspiration from Darwin's theory of evolution and use natural evolution abstractions to create artificial neural networks. The basic idea behind neuroevolution is to produce ANNs by using stochastic, population-based search methods. The evolutionary process makes it possible to evolve optimal network architectures that accurately address specific tasks. As a result, compact and energy-efficient networks with moderate computing power requirements can be created. The evolutionary process is executed by applying genetic operators (mutation and crossover) to a population of chromosomes (genetically encoded representations of ANNs/solutions) over many generations. The central belief is that, as in biological systems, subsequent generations will become better suited to withstand the selection pressure expressed by the objective function; that is, they will become better approximators of it.
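The generational loop just described can be sketched in plain Python. This is a hedged illustration rather than any particular library's API: `fitness` scores a chromosome (here, a flat list of weights), selection keeps the top half of the population, and one-point crossover plus Gaussian mutation produce the children. The toy objective and all parameter values are assumptions chosen for demonstration.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50,
           mutation_rate=0.1):
    # Population of chromosomes: each genome is a flat list of weights.
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]           # selection pressure
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(genome_len):            # per-gene mutation
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, 0.3)
            children.append(child)
        pop = survivors + children                 # next generation
    return max(pop, key=fitness)

# Toy objective: genomes whose genes sum close to 4 score highest.
best = evolve(lambda g: -abs(sum(g) - 4.0))
print(round(sum(best), 2))
```

Real neuroevolution algorithms such as NEAT add much more on top of this skeleton, notably speciation and the evolution of the network topology itself, but the generate-evaluate-select-vary cycle is the same.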
Next, we will discuss the basic concepts of genetic algorithms, of which you will need a moderate understanding to follow the rest of this book.

Genetic operators

Genetic operators are at the very heart of every evolutionary algorithm, and the performance of any neuroevolutionary algorithm depends on them. There are two major genetic operators: mutation and crossover (recombination).
In this chapter, you will learn about the basics of genetic algorithms and how they differ from conventional algorithms, which use error backpropagation-based methods for training ANNs.

Mutation operator

The mutation operator serves the essential role of preserving the genetic diversity of the population during evolution, preventing the process from stalling in local minima when the chromosomes of the organisms in a population become too similar. Mutation alters one or more genes in a chromosome, according to a mutation probability defined by the experimenter. By introducing random changes to the solver's chromosome, mutation allows the evolutionary process to explore new areas of the search space of possible solutions and find better and better solutions over the generations.
The following diagram shows the common types of mutation operators:
Types of mutation operators
The exact type of mutation operator depends on the kind of genetic encoding used by a specific genetic algorithm. Among the various mutation types, we can distinguish the following:
  • Bit inversion: A randomly selected bit is inverted (binary encoding).
  • Order change: Two genes are randomly selected and their positions are swapped (permutation encoding).
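Assuming binary and permutation encodings respectively, the two mutation types above can be sketched as follows (an illustrative example, not the book's code):

```python
import random

def bit_inversion(chromosome, rate=0.05):
    # Binary encoding: each bit flips independently with the given probability.
    return [1 - g if random.random() < rate else g for g in chromosome]

def order_change(chromosome):
    # Permutation encoding: two randomly chosen genes swap positions.
    mutated = list(chromosome)
    i, j = random.sample(range(len(mutated)), 2)
    mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated

random.seed(1)
print(bit_inversion([0, 1, 1, 0, 1], rate=0.5))
print(order_change([1, 2, 3, 4, 5]))
```

Note that both operators preserve the chromosome length, and order change additionally preserves the multiset of gene values, which is what makes it suitable for permutation encodings.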

Table of contents

  1. Title Page
  2. Copyright and Credits
  3. Dedication
  4. About Packt
  5. Contributors
  6. Preface
  7. Section 1: Fundamentals of Evolutionary Computation Algorithms and Neuroevolution Methods
  8. Overview of Neuroevolution Methods
  9. Python Libraries and Environment Setup
  10. Section 2: Applying Neuroevolution Methods to Solve Classic Computer Science Problems
  11. Using NEAT for XOR Solver Optimization
  12. Pole-Balancing Experiments
  13. Autonomous Maze Navigation
  14. Novelty Search Optimization Method
  15. Section 3: Advanced Neuroevolution Methods
  16. Hypercube-Based NEAT for Visual Discrimination
  17. ES-HyperNEAT and the Retina Problem
  18. Co-Evolution and the SAFE Method
  19. Deep Neuroevolution
  20. Section 4: Discussion and Concluding Remarks
  21. Best Practices, Tips, and Tricks
  22. Concluding Remarks
  23. Other Books You May Enjoy

Yes, you can access Hands-On Neuroevolution with Python by Iaroslav Omelianenko in PDF and/or ePUB format, as well as other popular books in Computer Science & Artificial Intelligence (AI) & Semantics. We have over one million books available in our catalogue for you to explore.