Chaos Theory in Psychology and the Life Sciences

Robin Robertson, Allan Combs

About This Book

This book represents the best of the first three years of the Society for Chaos Theory in Psychology conferences. While chaos theory has been a topic of considerable interest in the physical and biological sciences, its applications in psychology and related fields have until recently been obscured by its complexity. Nevertheless, a small but rapidly growing community of psychologists, neurobiologists, sociologists, mathematicians, and philosophers has been coming together to discuss its implications and explore its research possibilities.

Chaos theory has been termed the first authentic paradigm shift since the advent of quantum physics. Whether or not this is true, it unquestionably bears profound implications for many fields of thought, including the cognitive analysis of the mind, the nature of personality, the dynamics of psychotherapy and counseling, the understanding of brain events and behavioral records, the dynamics of social organization, and the psychology of prediction. To each of these topics, chaos theory brings the perspective of dynamic, self-organizing processes of exquisite complexity. Behavior, the nervous system, and social processes exhibit many of the classical characteristics of chaotic systems: they are deterministic and predictable in their global behavior, yet they do not submit to precise prediction.

This volume is the first to explore ideas from chaos theory in a broad psychological perspective. Its introduction, by the prominent neuroscientist Walter Freeman, sets the tone for diverse discussions of the role of chaos theory in behavioral research, the study of personality, psychotherapy and counseling, mathematical cognitive psychology, social organization, systems philosophy, and the understanding of the brain.
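The claim at the heart of that description, that a system can be fully deterministic and still resist precise prediction, is easy to demonstrate in a few lines. The sketch below is an illustration, not material from the book: it uses the logistic map, a standard textbook example of a chaotic system, and the starting values and the parameter r = 4.0 are arbitrary choices.

```python
# A minimal sketch of deterministic unpredictability (illustration only,
# not taken from the book). Two starting values differing by one part in
# a billion are pushed through the same fixed rule; within a few dozen
# steps they bear no resemblance to one another.

def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-9  # nearly identical initial conditions
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
```

Each step is an exact function of the one before, so the system is deterministic in the strictest sense; yet any uncertainty in the initial condition grows exponentially, so long-range prediction fails in practice.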


Information

Year: 2014
ISBN: 9781317780076
Pages: 416
Language: English

I
INTRODUCTION TO CHAOS THEORY

1
Chaos Theory and the Relationship Between Psychology and Science

Robin Robertson
Issues usually become critical for philosophy long before they impinge on other fields of thought. Philosophy is the mother of the sciences; initially, each was merely an area of interest for philosophy, then each in turn grew independent enough to call itself a separate field. More recently, each of the sciences has fragmented into myriad smaller subfields. Because the sciences disavow their philosophic underpinnings, each in turn repeats historical philosophic battles. In particular, when the youngest of philosophy’s children, psychology, came into existence in the 19th century, it prided itself on following the methods of its older siblings, the “hard” sciences. It was especially anxious to disavow any “philosophy” in its methods. This chapter traces psychology’s roots in the sciences from the Renaissance to the 20th century and shows the unnatural limits this heritage has placed on psychology. Finally, it briefly describes how chaos theory, which provides a broader scope for all fields of science, offers unique possibilities for unifying psychology.

THE RENAISSANCE DAWNS AND CREATIVITY REAWAKENS

When a new world view captures the imagination, a rich outpouring of creativity occurs in all areas of life. The Renaissance was such a time. For the first time in the Western World, we began to realize that we were not only God’s creation, but creators ourselves. With this realization we were free in a way we hadn’t been since the golden age of Greek philosophy. Although the growth of Christianity had been the greatest unifying force in the history of the Western World, it effectively brought an end to speculative thought about nature. Throughout the Middle Ages, scholastic philosophy instead perfected the analytic methods of Plato and Socrates, proceeding from religious dogma rather than from observed fact, and thus the beginnings of science were set back by many centuries. During the Middle Ages, God’s word was considered a better guide than human experience or reason.
In contrast, the Renaissance ideal was expressed in statements by Leonardo da Vinci (1452–1519) such as “Experience never errs; it is only your judgments that err by promising themselves effects such as are not caused by your experiments,” or “all our knowledge has its origin in our perceptions” (Richter, 1970, p. 288). Leonardo was able to combine this belief in the power of experience with a belief in God through a changed view of God. He addressed his God as “O admirable impartiality of Thine, Thou first Mover; Thou hast not permitted that any force should fail of the order or quality of its necessary results” (Richter, 1970, p. 285).
God had created a world of necessity, and it was our responsibility to use reason to discover the rules that governed that world. Leonardo da Vinci said that “the senses are of the earth; Reason stands apart in contemplation” (Richter, 1970, p. 287). Once that step had been taken, it was inevitable that we would eventually turn reason upon itself and try to describe the nature of the mind.

COPERNICUS AND THE OBSERVATIONAL METHOD

At roughly the same point in time, Nicholas Copernicus (1473–1543) stood apart in contemplation of the universe. Before Copernicus, the earth was assumed to be the central object in the universe, eternally fixed and unmoving. Ptolemy (2nd century A.D.) had speculated that a series of clear, perfectly formed, nesting spheres surrounded the earth, and on those spheres were the sun, the planets, and the stars. Because astronomical observations are critical for agriculture, medieval man knew a great deal about the actual positions and movements of the heavenly bodies. Increasingly, calculations based on Ptolemy’s perfect spheres didn’t fit those observations; more and more complex rationalizations had to be made in order to preserve earth’s central position. Copernicus had the brilliant realization that perhaps the movement was, in part, the perception of the viewer. Perhaps the earth was moving around the sun.
The scientific method is now so taken for granted that it is hard to realize that it is not self-evident. It was indisputable to Aristotle that heavier objects fall faster than lighter objects. For the next 1,900 years (!), Aristotle’s statement was regarded as so self-evident that it was never tested. It was only in the 17th century that Galileo tested the theory and found that it was false. It had just never occurred to anyone before that such self-evident facts might be wrong and needed to be tested. Without that realization, observation had to be subsumed within theory and dogma. Copernicus’ theory was the first intimation that perhaps the nature of reality depended on the position of the observer, a view that Einstein was to make so central in his Theory of Relativity. In a Copernican world, our observations and conclusions became central because, in a world of flux and movement, everything depended on the observer (De Santillana, 1956).
This new method of thought led to the accumulation of more detailed information about the outer world than had been added in the previous eighteen hundred years, since the end of the golden age of Greek philosophy. The need to deal with this new data in a systematic way led to the creation of an explicit scientific method in the 17th century.

GOD SAID: LET NEWTON BE, AND ALL WAS LIGHT!

When Isaac Newton was born in 1642, science as we know it was still a small thing, exciting to those who could see its possibilities, but little known otherwise. When Newton died in 1727, science was the dominant force in human thought, and he was the primary cause for that change in status. In poet Alexander Pope’s famous words, “Nature and Nature’s laws lay hid in night; God said, Let Newton be! and all was light” (Gamow, 1961, p. 51).
The 17th century was a time, in some ways, like our own, an “interesting time,” a time of change and unpredictability, when many contradictory ideas fought for supremacy. With the end of the absolute dominance that religion and the “ancients” (the Middle Ages’ most characteristic term for the great Greek thinkers) had previously had over Western thought, a vacuum was left, waiting to be filled by something new. Newton, with his Opticks and Principia, seemed to his contemporaries to have explained all of nature. Before Newton, there were speculations; after Newton, there were laws! Newton’s laws of nature explained motion, force, and light in straightforward ways that lent themselves to practical application. His laws concerned material particles, their motion, and their interaction. His was a world of absolutes: absolute space and time and perfect, indivisible particles moving in that absolute space and time. In Newton’s words from the Principia, “Absolute space, in its own nature, without relation to anything external, remains always similar and immovable. Absolute, true, and mathematical time, in itself, and from its own nature, flows equably without relation to anything external” (Gamow, 1961, p. 174).
It should be obvious that such a world is a construct of thought. Absolute space and time are concepts that Newton used in order to develop general theories of nature. Those general theories could then be applied to particular cases. The power of Newton’s concept of absolute space and time lies less in its possible truth than in its broad utility.
Perhaps of equal importance with Newton’s laws was the development of practical mathematical tools with which those laws could be applied to nature. First came René Descartes’ discovery of analytic geometry early in the 17th century. Euclid’s geometry had stood alone as the first and, for nearly 2,000 years, the only known complete, self-consistent scientific system. Descartes had the brilliant realization that geometric locations could be represented by numeric coordinates, much as we can today identify any location in a city by a pair of crossing streets. Using this coordinate method, geometric problems could be transformed into numeric problems. A great jump in abstraction had taken place.
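As a small illustration of that jump (an editor’s sketch, not the author’s), the snippet below treats two points as coordinate pairs and answers a geometric question, the distance between them, by arithmetic alone; the corner names and coordinates are hypothetical.

```python
# A minimal sketch of Descartes' coordinate method (illustration only):
# once locations are number pairs, "how far apart are these two corners?"
# becomes arithmetic, with no ruler and no drawing.
import math

def distance(p, q):
    """Euclidean distance between two points given as (x, y) pairs."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

corner_a = (0.0, 0.0)  # hypothetical city-grid coordinates
corner_b = (3.0, 4.0)
print(distance(corner_a, corner_b))  # 5.0, by the Pythagorean theorem
```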
Building on Descartes’ method, Newton and the German mathematician and philosopher Gottfried Wilhelm Leibniz (1646–1716; often called “the last universal man”) independently developed calculus. Although analytic geometry could present a numeric snapshot of physical reality, it couldn’t deal with irregularity or change. As far back as the Babylonians, mathematicians had approximated the area of an irregular shape by covering it with a large number of shapes whose area they did know. Newton and Leibniz saw that, if the number of such covering shapes were extended infinitely, an exact area could be found. Calculus was a method of extending this approximation method to infinity. Similarly, calculus could calculate rates of change of virtually any quantity that could be described in analytic geometric terms. Calculus provided physical science, especially physics and astronomy, with a tool of incredible versatility. Bishop George Berkeley, whose philosophy of idealism is discussed later, claimed “… the method of fluxions [i.e., Newton’s term for calculus] is the general key by help whereof the modern mathematicians unlock the secrets of Geometry, and consequently of Nature” (Bell, 1979, p. 90).
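A worked example may make the covering idea concrete (again an editor’s sketch, not the chapter’s): the code below covers the region under y = x² on [0, 1] with rectangles of shrinking width, and the summed area visibly approaches 1/3, the exact value calculus delivers in one stroke. The function, interval, and rectangle counts are arbitrary choices.

```python
# A minimal sketch of the approximation-to-the-limit idea behind calculus
# (illustration only). More and narrower rectangles give a better cover,
# and the total area converges on the exact answer.

def riemann_area(f, a, b, n):
    """Total area of n equal-width rectangles under f on [a, b] (midpoint rule)."""
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) * width for i in range(n))

for n in (4, 16, 256, 4096):
    print(f"{n:5d} rectangles: area = {riemann_area(lambda x: x * x, 0.0, 1.0, n):.8f}")
# The printed areas approach 1/3, the exact result of integration.
```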
After Newton, science began to supplant philosophy and religion as the dominant force in human thought. As psychology began to emerge in embryonic form over the next three centuries, it was to science that it turned for its model, not to philosophy. Although this had great advantages for the eventual emergence of psychology as a separate science in the 19th century, it also left psychology with some critical areas of ignorance, as is seen later.

THE FIRST EMPIRICIST AND THE ASSAULT ON EMPIRICISM

Influenced by his friend Newton, philosopher John Locke (1632–1704) first voiced the empiricist’s creed, a philosophy that, in many ways, is still implicit in the view of reality shared by too many scientists. Locke described the human mind as an initially empty vessel that gradually accumulates separate and distinct particles called ideas, which derive either directly or indirectly from sensory experience. Locke was trying to apply Newton’s view of the material world as closely as possible to the mind. Although his account was full of difficulties, Locke perfectly expressed the spirit of the new age of science; his views are representative of the mainstream of scientific thought prior to the 20th century (Berlin, 1956, pp. 30–112).
Problems surfaced when two brilliant philosophers, Bishop George Berkeley (1685–1753) and David Hume (1711–1776), took Locke’s ideas to their logical conclusions. Berkeley agreed with Locke’s assertion that all our ideas are derived from sensory experience, but he went one step further and said that it was therefore nonsense to speak of a physical world separate from our perceptions. As far as each of us is aware, there is no world unless we think of it. Because Berkeley was deeply religious, however, he avoided total solipsism by arguing that the world must necessarily exist because it always exists in God’s mind. This religious answer satisfied few, but his basic argument, that we can never prove the existence of an outer world, proved inordinately difficult to refute (Berlin, 1956, pp. 115–161).
Where Berkeley denied the existence of the material world, David Hume denied causality. He pointed out that, although we might assert that one event caused another, all we really know is that the two events are roughly contiguous in time and space. It is only from habit that we assert that one event caused the other; there is no logical and necessary connection between the two events. “The sole criterion of necessary truth, according to Hume is the law of non-contradiction. If a proposition cannot be denied without contradiction, it is necessarily true” (Aiken, 1956, p. 32).
The “sole criterion of necessary truth” is what, a century earlier, Leibniz had called analytic judgment: a judgment whose conclusion is already contained in its subject. But the conclusion that one event caused another is only a synthetic judgment: an observation about the outer world. Leibniz was the first to identify these specifically as separate types of judgment (alternatively termed a priori and a posteriori). An a posteriori, synthetic judgment can never have the necessity of an a priori, analytic judgment. But all that can be known of the world is a posteriori, derived from experience. Hume’s argument seemed unassailable. If there could be no logical necessity in any judgment about the outer world, anything could happen at any time. Hume’s argument had to be answered by philosophers, or philosophy was at a dead end. And, of course, it is the assumption of such necessary causality that forms the core of all of Newton’s Laws, of all science prior to the 20th century (Berlin, 1956, pp. 162–260).
Immanuel Kant (1724–1804) inaugurated modern philosophy with his answer in his Critique of Pure Reason in 1781. Kant agreed that analytic truth and synthetic truth are indeed separate and distinct. But both are experienced within the human mind, and thus, Kant argued, there also exists a third category of judgments, the synthetic a priori: judgments about the world that are nevertheless necessarily true. There is an actual world out there that we observe, but we can see it only through the lens of our own mental perceptions. In effect, Kant was arguing that human psychology should be the most important of all the sciences, since our innate psychological makeup inherently colors all observations of nature and all logical conclusions we draw from those observations. We might never be able to experience das Ding an sich (i.e., the thing in itself), but our minds are themselves structured much as the world is structured and contain necessarily true categories with which we perceive the world. This was an argument that would have been unthinkable before the Renaissance, and even before the birth of science. Kant’s argument changed the direction of philosophy to a degree comparable only to that of Plato over two millennia earlier. Unfortunately, it had little or no impact on science.
Scientists felt that Newton’s Laws, by their very existence, proved an effective counterargument to Berkeley and Hume. Although Berkeley and Hume denied that man could ever speak with necessity about the physical world, Newton seemed to have done just that. Meanwhile, it was sufficient for scientists to propose provisional theories about the world within the framework of Newton’s Laws. If evidence came along that a theory didn’t answer, well then, modify the theory. If this presented logical difficulties, they would eventually be resolved. It was a wonderfully pragmatic way to deal with reality. And because it was so inordinately successful, who was to argue?

THE BIRTH OF ASSOCIATIONISM

In Scotland, a contemporary of Hume’s, Thomas Reid (1710–1796), founded a school of philosophy that advocated “common sense” and “instinct” and threw out the whole argument. If Berkeley and Hume denied the existence of physical reality, then their arguments weren’t wor...
