
- 464 pages
- English
- ePUB (mobile friendly)
About this book
The mathematics employed by genetic algorithms (GAs) is among the most exciting discoveries of the last few decades. But what exactly is a genetic algorithm? A genetic algorithm is a problem-solving method that uses genetics as its model of problem solving. It applies the rules of reproduction, gene crossover, and mutation to pseudo-organism
The Practical Handbook of Genetic Algorithms by Lance D. Chambers is available in PDF and ePUB format.
Chapter 1
Shumeet Baluja
School of Computer Science Carnegie Mellon University
Artificial Neural Network Evolution: Learning to Steer a Land Vehicle
1.1 Overview
1.2 Introduction to Artificial Neural Networks
1.3 Introduction to ALVINN
1.3.1 Training ALVINN
1.4 The Evolutionary Approach
1.4.1 Population-Based Incremental Learning
1.5 Task Specifics
1.6 Implementation and Results
1.6.1 Using a Task Specific Error Metric
1.7 Conclusions
1.8 Future Directions
Abstract
This chapter presents an evolutionary method for creating an artificial neural network based controller for an autonomous land vehicle. Previous studies that used evolutionary procedures to evolve artificial neural networks have been constrained to small problems by extremely high computational costs. In this chapter, methods for reducing this computational burden are explored, and previous connectionist approaches to the task are discussed. The evolutionary algorithm used in this study, Population-Based Incremental Learning (PBIL), is a variant of the traditional genetic algorithm; it is described in detail in this chapter. The results indicate that the evolutionary algorithm is able to generalize to unseen situations better than the standard method of error backpropagation; an improvement of approximately 18% is achieved on this task. The networks evolved are efficient, using only about half of the possible connections. However, the evolutionary algorithm may require considerably more computational resources on large problems.
1.1 Overview
In this chapter, evolutionary optimization methods are used to improve the generalization capabilities of feed-forward artificial neural networks. Many of the previous studies involving evolutionary optimization techniques applied to artificial neural networks (ANNs) have concentrated on relatively small problems. This chapter presents a study of evolutionary optimization on a "real-world" problem, that of autonomous navigation of Carnegie Mellon's NAVLAB system. In contrast to the other problems addressed by similar methods in recently published literature, this problem has a large number of pixel-based inputs and also has a large number of outputs to indicate the appropriate steering direction.
The feasibility of using evolutionary algorithms for network topology discovery and weight optimization is discussed throughout the chapter. Methods for avoiding the high computational costs associated with these procedures are presented. Nonetheless, evolutionary algorithms remain more computationally expensive than training by standard error backpropagation. Because of this limitation, the ability to train on-line, which may be important in many real-time robotic environments, is not addressed in this chapter. The benefit of evolutionary algorithms lies in their ability to perform global search; they provide a mechanism that is more resistant to local optima than standard backpropagation. In determining whether an evolutionary approach is appropriate for a particular application, the conflicting needs for accuracy and speed must be weighed carefully.
The next section very briefly reviews the fundamental concepts of ANNs; this material will be familiar to the reader who has had an introduction to ANNs. Section 1.3 provides an overview of the artificial neural network based steering controller currently used for the NAVLAB, named ALVINN (Autonomous Land Vehicle in a Neural Network) [16]. Section 1.4 details the evolutionary algorithm used in this study to evolve a neuro-controller: Population-Based Incremental Learning [4]. Section 1.5 describes the task, and Section 1.6 presents the implementation and results. Finally, Sections 1.7 and 1.8 close the chapter with conclusions and suggestions for future research.
1.2 Introduction to Artificial Neural Networks
An Artificial Neural Network (ANN) is composed of many small computing units. Each of these units is loosely based upon the design of a single biological neuron. The models most commonly used are far simpler than their biological counterparts. The key features of each of these simulated neurons are the inputs, the activation function, and the outputs. A model of a simple neuron is shown in Figure 1.1. The inputs to each neuron are multiplied by connection weights giving a net total input. This net input is passed through a non-linear activation function, typically the sigmoid or hyperbolic tangent function, which maps the infinitely ranging (in theory) net input to a value between set limits. For the sigmoidal activation function, input values will be mapped to a point in (0,1) and for the hyperbolic tangent activation function, the input will be mapped to a value in (-1,1). Once the resultant value is computed, it can either be interpreted as the output of the network, or used as input to another neuron. In the study presented in this chapter, hyperbolic tangent activations were used.

Figure 1.1 The artificial neuron works as follows: the sum of the incoming (weight * activation) values is put through the activation function in the neuron. In the case shown above, this is a sigmoid. The output of the neuron, which can be fed to other neurons, is the value returned from the activation function. The x's can either be other neurons or inputs from the outside world.
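The weighted-sum-and-squash computation of Figure 1.1 can be sketched in a few lines of Python (the input values and weights below are arbitrary, chosen only for illustration):

```python
import math

def neuron_output(inputs, weights, activation="tanh"):
    """Compute one artificial neuron's output: weighted sum of inputs,
    passed through a non-linear activation function."""
    net = sum(x * w for x, w in zip(inputs, weights))
    if activation == "tanh":
        return math.tanh(net)                 # maps net input into (-1, 1)
    return 1.0 / (1.0 + math.exp(-net))       # sigmoid: maps into (0, 1)

# Example: a neuron with three inputs and illustrative weights
print(neuron_output([1.0, 0.5, -0.25], [0.2, -0.4, 0.1]))
```

Note how the choice of activation fixes the output range: tanh for (-1,1), as used in this chapter's study, and the sigmoid for (0,1).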
Artificial neural networks are generally composed of many of the units shown in Figure 1.1, as shown in Figure 1.2. For a neuron to return a particular response for a given set of inputs, the weights of the connections can be modified. "Training" a neural network refers to modifying the weights of the connections to produce the individual output vector associated with each input vector.

Figure 1.2 A fully connected three layer ANN is shown. Each of the connections can change its weight independently during training.
A simple ANN is composed of three layers, the input layer, the hidden layer and the output layer. Between the layers of units are connections containing weights. These weights serve to propagate signals through the network. (See Figure 1.2.) Typically, the network is trained using a technique which can be thought of as gradient descent in the connection weight space. Once the network has been trained, given any set of inputs and outputs which are sufficiently similar to those on which it was trained, it will be able to reproduce the associated outputs by propagating the input signal forward through each connection until the output layer is reached.
In order to find the weights which produce correct outputs for given inputs, the most commonly used method for weight modification is error backpropagation. Backpropagation is simply explained in Abu-Mostafa's paper "Information Theory, Complexity and Neural Networks" [1]:
…the algorithm [backpropagation] operates on a network with a fixed architecture by changing the weights, in small amounts, each time an example yi = f(xi) [where yi is the desired output pattern, and xi is the input pattern] is received. The changes are made to make the response of the network to xi closer to the desired output, yi. This is done by gradient descent, and each iteration is simply an error signal propagating backwards in the network in a way similar to the input that propagates forward to the output. This fortunate property simplifies the computation significantly. However, the algorithm suffers from the typical problems of gradient descent: it is often slow, and gets stuck in local minima.
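The update scheme quoted above can be sketched as follows; this is a minimal illustration, not ALVINN's actual training code, and the network size (2-2-1), learning rate, training pattern, and target are all arbitrary choices for the sketch:

```python
import math
import random

random.seed(0)  # reproducible illustrative run

def act(x):
    return math.tanh(x)

def dact(y):
    # derivative of tanh expressed in terms of its output: 1 - tanh(x)**2
    return 1.0 - y * y

# A tiny fully connected 2-2-1 network with random initial weights
n_in, n_hid, n_out = 2, 2, 1
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
w2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]

def forward(x):
    """Propagate the input signal forward through each layer."""
    h = [act(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    y = [act(sum(w * hi for w, hi in zip(row, h))) for row in w2]
    return h, y

def train_step(x, target, lr=0.1):
    """One backpropagation step: error signal propagates backwards,
    and weights change 'in small amounts' by gradient descent."""
    h, y = forward(x)
    dy = [(t - yi) * dact(yi) for t, yi in zip(target, y)]
    dh = [dact(hj) * sum(dy[o] * w2[o][j] for o in range(n_out))
          for j, hj in enumerate(h)]
    for o in range(n_out):
        for j in range(n_hid):
            w2[o][j] += lr * dy[o] * h[j]
    for j in range(n_hid):
        for i in range(n_in):
            w1[j][i] += lr * dh[j] * x[i]

# Repeatedly present one example; the response drifts toward the target
for _ in range(2000):
    train_step([0.5, -0.5], [0.8])
```

Because each step only follows the local gradient, a less friendly error surface than this one-pattern example can trap the same procedure in a local minimum, which is the weakness the quoted passage points out.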
If ANNs are not overtrained, they should, after training, be able to generalize to sufficiently similar input patterns which have not yet been encountered. Although the output may not be exactly what is desired, it should not be a catastrophic failure either, as would be the case with many non-learning techniques. Therefore, in training the ANN, it is important to use a diverse sample group which gives a good representation of the input data the network might see during simulation. A much more comprehensive tutorial on artificial neural networks can be found in [12].
1.3 Introduction to ALVINN
ALVINN is an artificial neural network based perception system which learns to control Carnegie Mellon's NAVLAB vehicles by watching a person drive; see Figure 1.3. ALVINN's architecture consists of a single hidden layer backpropagation network. The input layer of the network is a 30x32 unit two dimensional "retina" which receives input from the vehicle's video camera; see Figure 1.4. Each input unit is fully connected to a layer of four hidden units which are in turn fully connected to a layer of 30 output units. In the simplest interpretation, each of the network's output units can be considered to represent the network's vote for a particular steering direction. After presenting an image to the input retina, and passing activation forward through the network, the output unit with the highest activation represents the steering arc the network believes to be best for staying on the road.

Figure 1.3 The Carnegie Mellon NAVLAB Autonomous Navigation testbed.

Figure 1.4 The ALVINN neural network architecture.
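The "highest activation wins" reading of the 30 output units can be sketched as below; the linear index-to-angle mapping and the ±30 degree range are hypothetical stand-ins, not the NAVLAB's actual steering arc geometry:

```python
def steering_arc(outputs, max_angle=30.0):
    """Pick the steering direction voted for by the most active output unit.
    Unit 0 is treated as sharp left, the last unit as sharp right
    (an illustrative mapping only)."""
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    # Map the winning unit's index linearly onto [-max_angle, +max_angle]
    return -max_angle + 2 * max_angle * best / (len(outputs) - 1)

# A unit near the middle of the 30-unit layer is most active,
# so the chosen arc is close to straight ahead
activations = [0.1] * 30
activations[14] = 0.9
print(steering_arc(activations))
```

In the real system a distributed output representation is used, so in practice the response is read from a pattern of activation rather than from a single winning unit alone; the sketch shows only the simplest interpretation described above.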
To teach the network to steer, ALVINN is shown video images from the onboard camera as a person drives and is trained to output the steering direction in which the person is currently steering. The backpropagation algorithm alters the strengths of connections between the units so that the network produces the appropriate steering response when presented with a video image of the road ahead of the vehicle.
Because ALVINN is able to learn which image features are important for particular driving situations, it has been successfully trained to drive in a wider variety of situations than other autonomous navigation systems which require fixed, predefined features (e.g., the roadās center line) for accurate driving. The situations ALVINN networks have been trained to handle include single lane dirt roads, single lane paved bike paths, two lane suburban neighborhood streets, and lined divided highways. In this last domain, ALVINN has successfully driven autonomously at speeds of up to 55 m.p.h., and for distances of over 90 miles on a highway north of Pittsburgh, Pennsylvania.
The performance of the ALVINN system has been extensively analyzed by Pomerleau [16][17][18]. Throughout testing, various architectures have been examined, including architectures with more hidden units and different output representations. Although the output representation was found to have a large impact on the effectiveness of the network, other features of the network architecture were found to yield approximately equivalent results [15][16]. In the study presented here, the output representation examined is the one currently used in the ALVINN system, a distributed representation of 30 units.
1.3.1 Training ALVINN
To train ALVINN, the network is presented with road images as input and the corresponding correct steering direction as the desired output. The correct steering direction is the steering direction the human driver of the NAVLAB has chosen. The weights in the network are altered using the backpropagation algorithm so that the networkās output more closely corresponds to the target output. Training is currently done on-line with an onboard Sun SPARC-10 workstation.
Several modifications to the standard backpropagation algorithm are used to train ALVINN. ...
Table of contents
- Cover
- Title Page
- Copyright Page
- Preface
- Table of Contents
- Chapter 0: Multi-Niche Crowding for Multi-Modal Search
- Chapter 1: Artificial Neural Network Evolution: Learning to Steer a Land Vehicle
- Chapter 2: Locating Putative Protein Signal Sequences
- Chapter 3: Selection Methods for Evolutionary Algorithms
- Chapter 4: Parallel Cooperating Genetic Algorithms: An Application to Robot Motion Planning
- Chapter 5: The Boltzmann Selection Procedure
- Chapter 6: Structure and Performance of Fine-Grain Parallelism in Genetic Search
- Chapter 7: Parameter Estimation for a Generalized Parallel Loop Scheduling Algorithm
- Chapter 8: Controlling a Dynamic Physical System Using Genetic-Based Learning Methods
- Chapter 9: A Hybrid Approach Using Neural Networks, Simulation, Genetic Algorithms, and Machine Learning for Real-Time Sequencing and Scheduling Problems
- Chapter 10: Chemical Engineering
- Chapter 11: Vehicle Routing with Time Windows using Genetic Algorithms
- Chapter 12: Evolutionary Algorithms and Dialogue
- Chapter 13: Incorporating Redundancy and Gene Activation Mechanisms in Genetic Search for Adapting to Non-Stationary Environments
- Chapter 14: Input Space Segmentation with a Genetic Algorithm for Generation of Rule Based Classifier Systems
- Appendix 1: An Indexed Bibliography of Genetic Algorithms
- Index