Optimality in Biological and Artificial Networks?

Edited by Daniel S. Levine and Wesley R. Elsberry

528 pages · English

About This Book

This book is the third in a series based on conferences sponsored by the Metroplex Institute for Neural Dynamics, an interdisciplinary organization of neural network professionals in academia and industry. The topics selected are of broad interest both to those designing machines to perform intelligent functions and to those studying how living organisms actually perform these functions, and they are meant to generate discussion of basic and controversial issues in the study of mind.

The topic of optimality was chosen because it has provoked considerable discussion and controversy in many different academic fields. The issue has several aspects. First, can the actual behavior and cognitive functions of living animals, including humans, be considered optimal in some sense? Second, what is the utility function for biological organisms, if any, and can it be described mathematically?

Rather than organize the chapters on a "biological versus artificial" basis, or by the stance they take on optimality, it seemed more natural to organize them by the level of questions they pose or by the intelligent functions they address. The book begins with some general frameworks for discussing optimality, or the lack of it, in biological and artificial systems. The next set of chapters deals with general mathematical and computational theories that help clarify what the notion of optimality might entail in specific classes of networks. The final section deals with optimality in the context of many high-level issues, including exploring one's environment, understanding mental illness, linguistic communication, and social organization. The diversity of topics covered in this book is designed to stimulate interdisciplinary thinking and speculation about deep problems in intelligent system organization.

Information

Publisher: Routledge
Year: 2013
ISBN: 9781134786459
Part I: WHAT IS THE ROLE OF OPTIMALITY?

Chapter 1: Don’t Just Stand There, Optimize Something!
Daniel S. Levine
University of Texas at Arlington
Daniel Levine’s chapter, Don’t Just Stand There, Optimize Something!, attempts to give a general theory for how much influence optimization has on human decision making. Levine considers the roles of optimization both at the descriptive level (how do we make decisions in reality?) and the normative level (how should we make decisions?).
Levine compares actual human decision making with self-actualization, Abraham Maslow’s description of optimal human mental functioning. He proposes a tentative neural network theory for self-actualization that posits an explanation for why it doesn’t always happen. A submodule of his network calculates a utility function of its present state, and another node (analogous to a function of the frontal lobes) imagines alternative states and calculates their utility functions. If an alternative state is seen as “better” than the current state, this generates “negative affect,” which drives the network to seek a new, and presumably closer to optimal, state. But the strength of the network’s approach to a new state is regulated by a complex chemical transmitter system. If this strength is insufficient, the network can get “stuck in local minima” in the familiar fashion of back propagation networks.
Being stuck in a local minimum is not necessarily bad. It may be analogous to satisficing, the term coined by Herbert Simon for reaching the first decision that is good enough to satisfy current needs, even if it is known not to be optimal (a concept also discussed in the chapters by Golden and by Werbos). Also, Levine points out, as does Leven’s chapter, that rational optimization of all decisions may lead to spending too much time and effort on decisions whose consequences don’t merit this effort. Based on previous work of Pribram, he suggests that the frontal lobes, hippocampus, and amygdala combine into a system that decides which goals are worth how much effort to optimize. He makes a distinction, also made in Werbos’ chapter, between optimizing at “macro” and “micro” levels.
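Simon’s distinction can be made concrete with a short sketch (illustrative only; the options, utilities, and aspiration threshold below are invented, not drawn from the chapter). An optimizer scores every alternative before choosing; a satisficer accepts the first alternative that clears an aspiration level:

```python
# Illustrative sketch of satisficing vs. optimizing (all values invented).

def optimize(options, utility):
    """Examine every option and return the one with the highest utility."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Return the first option whose utility meets the aspiration level."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # no option was "good enough"

utilities = {"A": 3, "B": 7, "C": 9}  # hypothetical utilities
u = utilities.get

print(optimize(utilities, u))      # C -- found only after scoring everything
print(satisfice(utilities, u, 5))  # B -- the first option scoring at least 5
```

The satisficer returns B without ever evaluating C; whether that is a flaw or a sensible economy of effort is precisely the question the chapter raises.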
ABSTRACT
This chapter deals at a philosophical level with two questions about human cognitive functioning: (1) Do we always optimize some variable that provides an advantage to us? and (2) Should we always optimize some variable? The first question is answered with a resounding “No.” Some of the influence of irrational constructs, such as metaphors, on cognition is explored. Then a tentative neural network theory is proposed for self-actualization, an optimal state, partly rational and partly intuitive, that is achieved only a small portion of the time by most people and more consistently by a minority of people. The second question is left open: some situations are given whereby detailed rational strategies are counterproductive, but there still may exist a broadly normative utility function that combines reason and intuition.
1. INTELLECTUAL ISSUES
The conference on which this book is based has roots going back to the early 1970s. At that time many neural network theorists sought to explain human behavior broadly as maximizing positive reinforcement and/or minimizing negative reinforcement. The most important of these optimality theorists were Harry Klopf, Paul Werbos, and Gregory (now Gershom-Zvi) Rosenstein (the last two being contributors to this volume). Let us look at where these scholars derived their inspiration. Part of it came from analogy with economics, particularly microeconomics: just as consumers and producers are assumed to maximize profit, minimize cost, and so on, organisms maximize biological reinforcement, which is treated as a sort of “net income” (cf. Rosenstein, Chapter 19). But the inspiration for optimality also came from evolutionary theory. Ever since the age of Darwin, there has been a strong teleological itch among biologists, a tendency to see prevailing behavior as somehow justified from an evolutionary standpoint, as serving a purpose.
Yet in all disciplines (less in economics than anywhere else, cf. Leven, Chapter 3) there has been a countervailing tendency to see rationality, particularly human rationality, as flawed, to see Edgar Allan Poe’s imp of the perverse (Stedman & Woodberry, 1894) in some of the actions of biological organisms. How, using optimality principles alone, can we explain addictive gambling, neurotic self-punishment, sexual attraction to toxic personalities, election of obvious scoundrels to political office? The list goes on and on. The title of my chapter is actually a variant of one used in the commentary (on the article of Schoemaker, 1991) by Paelinck (1991), who in turn took it from a cartoon in which it is an exhortation from an American economics professor to his students. Sometimes, mathematical theories needed to justify behavior within a rubric of “optimizing something” lead to absurdly tortuous utility functions.
The debate over how much human behavior is rational goes on in every scientific and social scientific discipline, with major effects on the philosophical foundations of these fields (Cohen, 1981; Kyburg, 1983; Schoemaker, 1991). Sometimes (Jungermann, 1983), this debate has been couched in terms of optimism versus pessimism, with the believers in pervasive rationality being counted as optimists. But look at the optimality question from another viewpoint, that of the social reformer. If you are interested in ridding the world of unjust war, poverty, or environmental pollution, each of which is at least partly caused by human actions, you hope that these actions do not represent optimal human behavior. That is, people are capable of “better” than war or poverty or pollution. Hence, from the social reform viewpoint, Jungermann’s “optimists” become “pessimists,” and his “pessimists” become “optimists”!
Even from an evolutionary viewpoint, Stephen Gould has shown that evolution does not necessarily imply “progress.” Gould (1980, p. 50) reviewed two principles Darwin had propounded about nonadaptive biological change. One is that “organisms are integrated systems and adaptive change in one part can lead to nonadaptive modifications of other features.” The other is that “an organism built under the influence of selection for a specific role may be able, as a consequence of its structure, to perform many unselected functions as well.” Darwin disagreed with other biologists of his day who were stricter believers in optimality, such as Alfred Russel Wallace. He believed, as I do, that whereas evolution may lead to optimization of some functions, this process could have accidental by-products that are not always optimal (see Stork, Jackson, & Walker, Chapter 4, for a specific biological example). What Gould said about biological functions in general is particularly true of neuropsychological functions.
In another sense, evolutionary theory does not tell us the whole story about human choice. Natural selection only means that traits will be selected that promote survival (of the individual or of his or her genes). It does not mean that traits will be selected that enhance the quality of life in senses that most of us would agree on, the best use of human potential.1 In later sections, I develop a tentative neural theory of self-actualization, defined by Abraham Maslow (1968, 1972) as the state of optimal human potential. Maslow noted that self-actualization is achieved consistently by about 1% of the population and on rare occasions by most other people. This is far less often than would be predicted if evolution selected for a self-actualizing tendency.
Schoemaker (1991) asked what is the level at which the concept of optimality is meaningful. He asked whether optimality is “(1) an organizing principle of nature, (2) a set of relatively unconnected techniques of science, (3) a normative principle for rational choice and social organization, (4) a metaphysical way of looking at the world, or (5) something else still” (p. 205). The chapters in this volume vary widely in their viewpoints, but the largest segment seems to have arrived at a general consensus. The majority of authors herein, and of scientists in general, believe that optimality contains elements of both (1) and (3) of Schoemaker’s choices. It is an organizing principle of nature but not the organizing principle, that is, it does not point to a universal rule. The chapters in this volume by DeYong and Eskridge, Elsberry, Leven, and Werbos particularly point to optimization as a useful tool for understanding consciousness or intelligence, in spite of having major limitations. A system for vision, or cognition, or motor control may be optimal in its overall organization but suboptimal in parts, or vice versa.
In dynamical systems in general, and neural network systems in particular, the crucial distinction is between competing attracting states of the system. This includes the distinction, now already a cliché after less than 10 years in wide usage, between global and local minima of an energy (or cost, or error) function. Ironically, the bugbear of nonoptimal local minima now particularly haunts back propagation networks, which originated with Werbos’ (1974/1993) effort to link brain theory to the optimization rubrics of economics!
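The local-versus-global distinction can be seen in a toy gradient-descent sketch (not from the chapter; the energy function E and all numbers are invented for illustration). Plain descent on a non-convex energy halts in whichever valley the starting point drains into:

```python
# Toy illustration (invented, not from the chapter): plain gradient
# descent on a non-convex "energy" function stops at whichever
# minimum its starting point drains into.
# E(x) = x**4 - 3*x**2 + x has a shallow local minimum near x = 1.13
# and a deeper global minimum near x = -1.30.

def grad_E(x):
    """Derivative of E(x) = x**4 - 3*x**2 + x."""
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=5000):
    """Follow the negative gradient from starting point x."""
    for _ in range(steps):
        x -= lr * grad_E(x)
    return x

print(descend(2.0))   # converges near 1.13: the shallower local minimum
print(descend(-2.0))  # converges near -1.30: the global minimum
```

Nothing in the plain descent rule can carry the state out of the shallow valley once it is trapped there; that is the formal sense of “stuck in local minima” used in the text.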
2. FRONTAL LOBE DAMAGE AS A PROTOTYPE OF NONOPTIMAL COGNITION
The hint that nonoptimality is pervasive in human cognition is important to those developing machines to perform higher-level cognitive functions, because those functions involve a mixture of reason and intuition. It suggests that although designers of such machines should study neuroscience and neuropsychology, they should not adhere slavishly to the buzzword of biological realism. This is because someone might devise an intelligent machine that is less vulnerable than our brains to, say, cognitive dissonance (Festinger, 1957) or conflict between reason and emotion. In fact, such a hypothetical machine might even be based on the same types of components as our brains are but with those components combined in a novel architecture. If so, as Lorenz (1966) and Werbos (Chapter 2) suggest, the long-awaited missing link between animals and a truly humane being might be ourselves!
Since the frontal lobes integrate sensory, semantic, affective, and motor systems among others (Pribram, 1991), theories of their function would seem to bear on the issue of optimality. There have been several recent neural network simulations of cognitive effects of frontal lobe damage (Bapi & Levine, 1994; Cohen, Dunbar, & McClelland, 1990; Dehaene & Changeux, 1989, 1991; Leven & Levine, 1987; Levine & Parks, 1992; Levine & Prueitt, 1989). These networks model behavioral circuits combining cognition, motivation, and reinforcement, in which frontal connections (with the limbic system, hypothalamus, thalamus, caudate nucleus, and perhaps midbrain) play a controlling role. I suggest that these frontal damage effects can be treated as prototypical examples of nonoptimal human cognitive functioning.
David Stork (personal communication) has objected that any lesioned system’s functioning is of course suboptimal and does not indicate the system’s normal processes. However, our models treat frontal damage as weakening, not breaking, a connection. This is because the frontal lobes provide the most direct link, but not the only link, between sensory areas of the cerebral cortex and affective areas of the limbic system and hypothalamus (Nauta, 1971). Hence, optimal cognitive function (which Levine, Leven, & Prueitt, 1992, compared to self-actualization) requires balance of activities and connection weights in many brain areas. This balance, I conjecture, is disrupted not only by focal brain damage but by many other contingencies, including bad education or maladaptive social customs (society’s “phobias” or “obsessive-compulsive neuroses”). Figure 1.1 shows the continuum of human cognitive function from least to most integrated.
The networks of Levine and Prueitt (1989) incorporated two generic frontal lesion effects: perseveration in formerly reinforcing behavior, and excessive attraction to novelty for its own sake. Many familiar human and social phenomena are analogs of these two types of effects. For example, one form of perseverative behavior is prejudice against a group of people because of an early bad experience. Sometimes, in fact, the prejudiced individual will base a habit of prejudice not on direct experience with Blacks, Jews, women, laborers, mathematicians, and so forth, but on what he or she has heard about the group. If that kind of conditioning is obtained from an entire social circle, or from influential individuals such as parents, it can override later, more favorable, direct experience with the group in question.2 One form of excessive novelty preference is following fads, whether in political beliefs, scientific outlook, or drug usage.
3. THE PROMISE AND SPECTER OF OUR METAPHORS
Lakoff and Johnson (1980) noted how much human thought is structured by metaphors. The metaphors we use are unconscious, frequently culturally based, and create whole systems of analogies that become embedded in our common language without our being aware of their source. One of their key examples was the metaphor “ARGUMENT IS WAR.” They gave the following examples of common American English phrases informed by that metaphor:
Your claims are indefensible.
He attacked every weak point in my argument.
His criticisms were right on target.
I demolished his argument.
I’ve never won an argument with him.
You disagree? Okay, shoot.
If you use that strategy, he’ll wipe you out.
He shot down all of my arguments. (p. 4; authors’ italics)
Lakoff and Johnson emphasized that the war metaphor is not the only possible way to view arguments. By contrast, they asked us to “Imagine a culture where an argument is viewed as a dance, the participants are seen as performers, and the goal is to perform in a balanced and aesthetically pleasing way. In such a culture, people would view arguments differently, experience them differently, carry them out differently, and talk about them differently.”
In much the manner that frontal patients on a card sorting test develop an unbreakable positive feedback loop between their habits and their decision criteria (Milner, 1964), people frequently develop hard-to-break positive feedback loops between their metaphors and their belief systems. For example, when in graduate school I had an argument with a roommate about equality between men and women. My roommate, whose cultural background was more sexist than mine, said at one point in the discussion, “But I should be the man in the house.” What was happening, I believe, is that he h...
