1 Introduction
Preamble: Nature has brought forth a huge variety of complex life forms which populate numerous biological niches all over the earth. The principles of natural selection, believed to be the main forces driving the evolution of life on earth, appear to be a powerful mechanism capable of generating complexity in many different facets. But can artificial evolutionary approaches such as Evolutionary Robotics, which significantly simplify the biological mechanisms and work with population sizes and time spans incomparable to those in nature, also create truly complex structures? And can these structures, once evolved, be “reverse engineered” to provide insight into their functioning? These are two major questions Evolutionary Robotics is dealing with today.
Robotics is a well-established field of research that continues to yield robotic systems put to practical use in a wide variety of areas. Among these are industrial applications, which typically use robots to perform tasks requiring great precision, speed, power or endurance (e. g., welding, painting, assembly); disaster control and military applications, where robots are frequently deployed to do jobs that are hazardous to people (e. g., defusing bombs, exploring shipwrecks) or serve as unmanned (aerial) vehicles with various objectives; medical applications, among which the most interesting today lie in the area of surgery assistance; and service or domestic applications, where robots support people in everyday tasks (e. g., vacuum cleaning, wiping) or in special situations such as elder care. In many of these areas robots already carry out the tasks they have been designed for with virtual perfection. This is facilitated by an extensive body of knowledge in the fields of electrical engineering, kinematics and mechanics, and by growing experience in the design of robots for a number of specific applications.
However, there is an even bigger class of potential robot applications which are desirable and apparently realizable in principle, but whose requirements cannot be satisfied today. Such requirements can be of a rather technical nature, meaning, for example, that parts of a robot would have to be manufactured smaller (motors, sensors or the energy supply) or that the required motoric precision is greater than current technology is capable of providing. But beyond that, there are substantial unresolved issues concerning the programming of robots. For a great number of desired applications, today’s algorithmic and software engineering techniques are insufficient to yield the corresponding robotic behaviors. Especially when dealing with autonomous robots, even seemingly simple behaviors that are easy to describe on a high level in terms of “what a robot should do” can be hard to encode in a robot’s programming language. This problem arises from the difficulty of correctly capturing and interpreting a robot’s “world view”, which is given by its sensory perception and inner state, and of foreseeing the different situations a robot might get into, particularly when interacting with other robots or living beings. Conversely, relatively simple behavioral code may cause robots to produce behavior which looks complex to an outside observer. Consequently, it is usually hard to decide from mere observation of a robot in specific situations what general type of behavior it is programmed to perform. This means that even when a behavioral program exists which (apparently) lets a robot perform a desired behavior correctly, it may be impossible to prove that the behavior actually has certain desired properties or lacks certain undesired ones.
While this has long been known to be true from an algorithmic point of view (at least for any program written in a Turing-complete programming language), Valentino Braitenberg showed in 1984 that it can even hold for a single autonomous robot controlled by a program from a class of simple reactive functions far from being Turing-complete [17]. Using vehicles equipped with only a few light or distance sensors and two wheels, Braitenberg connected (mostly in thought experiments) the sensory inputs directly to the wheel motors in specific ways. In doing so, he was able to find examples of behaviors which, from the outside, give the impression of complex intentions or “feelings” such as love, aggression, foresight and optimism, but are in fact extremely simple to describe algorithmically. Of course, when the underlying algorithms are known, such behaviors would not be considered complex; on the other hand, there exists a huge variety of complex behaviors for which no simple algorithms are known today.
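To make this concrete, the following minimal sketch shows how such direct sensor-to-motor wirings can be written down. It is an illustration only: the sensor model, gain values and function names are assumptions made for this example, not parameters taken from Braitenberg’s original descriptions.

```python
# Braitenberg-style reactive control: two light sensors are wired directly
# to two wheel motors, with no internal state and no planning.
# Sensor readings and gains below are illustrative assumptions.

def aggression(left_light: float, right_light: float, gain: float = 1.0):
    """Crossed excitatory wiring ('aggression'): the vehicle turns toward
    a light source and accelerates as it approaches."""
    left_wheel = gain * right_light    # right sensor drives left wheel
    right_wheel = gain * left_light    # left sensor drives right wheel
    return left_wheel, right_wheel

def fear(left_light: float, right_light: float, gain: float = 1.0):
    """Uncrossed excitatory wiring ('fear'): the vehicle speeds up and
    turns away from the light source."""
    return gain * left_light, gain * right_light

def love(left_light: float, right_light: float, bias: float = 1.0):
    """Uncrossed inhibitory wiring ('love'): the vehicle approaches the
    light and comes to rest in front of it."""
    return bias - left_light, bias - right_light

if __name__ == "__main__":
    # A light source slightly to the left: the 'aggressive' vehicle turns
    # toward it (slower left wheel), the 'fearful' one turns away from it.
    print(aggression(0.8, 0.4))   # (0.4, 0.8) -> steers left, toward the light
    print(fear(0.8, 0.4))         # (0.8, 0.4) -> steers right, away from it
```

Despite consisting of nothing more than a fixed linear mapping from sensors to motors, such controllers can appear purposeful or “emotional” to an observer who does not know the wiring.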
In a robot swarm, i. e., a collection of at least two (usually many) interacting autonomous robots, the discrepancy between an observed or desired behavior and the corresponding algorithmic description becomes even greater. There, the correct interpretation of sensory information, which is usually rapidly changing and impaired by interference from the sensors of other robots, as well as the understanding of swarm dynamics on a theoretical level, still involve many unresolved problems. A classic engineering approach for creating programs for a swarm of robots might involve (1) describing the swarm behavior as the result of interactions among individual behaviors, and (2) encoding the individual behaviors into controllers. While the second step is essentially the above-discussed problem of programming single robots, the first step adds the complexity of decomposing a process that results from dynamical interactions among its sub-components. Altogether, it seems hard to imagine a general approach, suitable for a large class of desired behaviors, being based on this or any other engineering process known so far.
On the other hand, desired applications for autonomous robots have long inspired the imagination of science fiction writers, and, used wisely, such technology could improve the lives of many people in various ways. Applications range from rather simple tasks such as cleaning and other home services or Braitenberg’s behaviors mentioned above to sophisticated tasks such as searching for objects with specific properties in an unknown and complex environment, exploring unknown areas in a group by creating and exchanging map information, or recognizing and manipulating, in elaborate ways, objects which are arbitrarily located and oriented in space. Such technical behaviors are essential for ambitious applications such as searching for and rescuing people in a disaster area, or the even more challenging application of floating through a human body with the aim of curing diseases. A further complicating property of these and many other applications is that they are implicitly decentralized. This means that the robots have little or no help from “outside”, such as a central computer providing them with maps, their current position etc. Instead, they have to base all their decisions on local observations and within-swarm communication only.
A common approach for finding decentralized strategies suitable to solve tasks with a robot swarm is to observe swarm-like populations in nature [113]. For example, populations of ants or bees have evolved adaptive behaviors emerging from simple local interactions with their environment and each other, and they usually solve their biological challenges in a nearly optimal way. By careful analysis of the behavior of natural swarms, local rules for collective behavior can be extracted in some cases [14]. One of the most prominent examples of successful behavior extraction from nature is the well-known field of Ant Colony Optimization, which is based on (a strongly abstracted version of) the capability of ants to find shortest paths from a nest to a foraging place [57]. However, there is no guarantee that the behavior of a natural swarm can be understood sufficiently well to successfully transfer the corresponding rules to an artificial swarm. Moreover, there are many desirable tasks for artificial swarms which have no counterpart in nature. Additionally, generating programs manually or by imitating the behavior of living beings is challenging and expensive, and it can be infeasible in many cases.
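As a rough illustration of the kind of local rule that can be extracted this way, the sketch below reproduces the two core ingredients behind Ant Colony Optimization on the classic “two bridges” setting: path choice biased by pheromone, and pheromone evaporation combined with reinforcement of shorter paths. It is a deliberately simplified variant for illustration, not the algorithm of [57]; the parameter values are arbitrary assumptions.

```python
# Simplified pheromone dynamics on two alternative paths (short vs. long)
# between a nest and a food source.  Parameter values are illustrative.
import random

LENGTHS = {"short": 1.0, "long": 2.0}      # path lengths
pheromone = {"short": 1.0, "long": 1.0}    # equal initial pheromone
EVAPORATION = 0.1
N_ANTS = 20

def choose_path() -> str:
    """Each ant picks a path with probability proportional to its pheromone."""
    r = random.uniform(0.0, sum(pheromone.values()))
    return "short" if r < pheromone["short"] else "long"

for step in range(50):
    choices = [choose_path() for _ in range(N_ANTS)]
    # Evaporation: old information slowly fades on both paths.
    for path in pheromone:
        pheromone[path] *= (1.0 - EVAPORATION)
    # Reinforcement: each ant deposits pheromone inversely proportional
    # to the length of the path it travelled.
    for path in choices:
        pheromone[path] += 1.0 / LENGTHS[path]

# After a few dozen iterations, almost all pheromone (and hence almost all
# traffic) concentrates on the shorter path.
print(pheromone)
```

The point of the example is that the individual rule is trivial to state, yet the useful collective outcome (shortest-path selection) only emerges from many interacting individuals; extracting such rules from real swarms is exactly the difficult part.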
It is not foreseeable whether robot behaviors as complex as the ones described above will some day be manually programmable using a structured engineering process. Then again, in complex or dynamic environments it might even be disadvantageous to provide a robot with a monolithic, fixed program. In this context, it seems reasonable to introduce a flexible and adaptive component into a robot’s behavior which lets the robot learn while exploring an environment and base its decisions on past observations. In the last two decades, various approaches have been proposed which provide robots with a high-level description of the task they are supposed to perform, omitting a detailed description of the environment or of how exactly to perform the task. The robot, in turn, has to figure out a way of solving the task in the environment it is placed in. In some sense, the robot is supposed to program itself on the low level of atomic actions while comparing the results to the high-level description of the desired behavior.
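In the evolutionary approaches discussed below, such a high-level description is typically encoded as a fitness function, and the “self-programming” takes the form of a search over controller parameters. The skeleton below sketches this loop under strong simplifying assumptions: a controller is reduced to a flat parameter vector, and evaluate_in_simulation() is a placeholder (with a toy objective) standing in for running the robot with that controller and scoring the observed behavior against the task description.

```python
# Generic evolutionary loop for controller parameters (illustrative skeleton).
import random

GENOME_LENGTH = 16      # number of controller parameters (assumption)
POP_SIZE = 20
N_PARENTS = 5
MUTATION_STD = 0.1
GENERATIONS = 100

def evaluate_in_simulation(genome):
    """Placeholder fitness: in a real experiment this would run the controller
    on the (simulated) robot and measure task performance."""
    return -sum(g * g for g in genome)   # toy objective: parameters near zero

def mutate(genome):
    return [g + random.gauss(0.0, MUTATION_STD) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    ranked = sorted(population, key=evaluate_in_simulation, reverse=True)
    parents = ranked[:N_PARENTS]                  # selection of the fittest
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - N_PARENTS)]

best = max(population, key=evaluate_in_simulation)
print("best fitness:", evaluate_in_simulation(best))
```

The human only specifies what good behavior means (the fitness function); how the task is solved at the level of atomic actions is left to the search process.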
This concept is followed in the fields of Evolutionary Robotics (ER) and Evolutionary Swarm Robotics (ESR). Using mechanisms inspired by natural evolution, ER and ESR in a sense generalize the above-mentioned principle of observing specific biological populations to mimicking the overall process of “problem solving” in nature. The potential of this idea is evident in any living animal organism, be it as simple as a fly or a spider – not to mention higher animals and humans. These “natural robots” emerged from evolution capable of performing highly complex decisions and actions in pursuit of reaching fertile – or even “happy” – adulthood. All of them are incomparably more complex than any artificial robot that will be produced in the foreseeable future. However, before real advantage can be taken of this powerful principle, there is still a knowledge gap to close, which in essence can be summarized as the following question: (how) can the complexity of natural evolution be sufficiently abstracted to provide a technology feasible for human roboticists while still being capable of evolving sophisticated robotic systems?
1.1 Evolutionary Robotics and Evolutionary Swarm Robotics
The basic ideas underlying ER were originally borrowed from the fields of Artificial Intelligence (AI) [168] and Evolutionary Computation (EC) [35], [77], [111]. However, one of the first researchers to think of an evolutionary process driving the generation of control systems for artificial organisms was Alan Turing, who suggested this idea as early as 1950 [193]. In the years 1992 and 1993, the first experiments on the artificial evolution of autonomous robots were reported by a team at the Swiss Federal Institute of Technology in Lausanne, a team at the University of Sussex at Brighton, and a team at the University of Southern California. The success and potential of these studies triggered a wave of new activity in labs across Europe, Japan, and the United States. Since 1993, these techniques have been subsumed under the field of ER [28], [142], which is considered to roughly capture all techniques for the automatic creation and programming of autonomous robots that are inspired by the Darwinian principle of selective reproduction of the fittest [32].
In ER, robots are considered to be autonomous organisms that develop a control system on their own in close interaction with their environment and usually without human intervention. Some ER approaches also include the evolution of a body configuration for the robots, in addition to or instead of behavioral evolution. The ER methods share important characteristics with bi...