THE MYTH OF CONTROL
Complex versus Linear Systems
“The control of nature
is a phrase conceived in arrogance, born of the Neanderthal age of biology and philosophy . . .” —Rachel Carson1
My first exposure to computers based on microchip technology was in 1980. Jimmy Carter had just lost the presidency to Ronald Reagan. An evening news report showed Carter at home writing his memoirs on a “word processor,” which if I remember correctly looked like an early Apple computer. Just eight years earlier I had been running punch cards through a transistor-run IBM 360 mainframe computer that took the space of a small room at the University of New Hampshire’s computer center. At that time the future of computers appeared to be in large mainframes like this one, which serviced a host of programmers; computers would be the domain of the highly trained. Who could have imagined that in less than a decade new technology would change the field so dramatically? As I watched Carter typing away on his personal computer, I realized that in eight short years all of my computer training had become obsolete. Yet
the now-antiquated mainframe computer did generate some startling discoveries, possibly the most important being chaos theory.
It was on a Royal McBee, a vacuum tube–run computer in the early 1960s, that Edward Lorenz, a research meteorologist at MIT, inadvertently stumbled upon a finding that shook the very paradigmatic foundations of Western science. Lorenz’s Royal McBee would look like a prehistoric dinosaur next to today’s computers—a huge mass of tubes and wires that rattled loudly while operating. Although more than a hundred times as large as a personal computer, it had thousands of times less “brain power.” Yet it could do something that people couldn’t—it could execute millions of calculations in a relatively short span of hours.
Lorenz had been attracted to weather as a child and followed this interest to one of the premier research institutions in the world. Unlike astronomy—a physical science that could make fairly accurate long-term predictions regarding eclipses or the return of comets—meteorology had progressed little through the twentieth century in terms of accurately projecting the weather just a few days hence. Lorenz hoped to change that. Within his Royal McBee he created a virtual weather system. Through the coupling of twelve equations that related such things as pressure to wind direction or temperature to pressure, he produced a computerized system that mimicked the weather.2
He hoped that he would be able to glean repeated patterns from his virtual system that could be applied to improving real weather forecasting.
During the winter of 1961 he stopped the computer in the midst of one of its runs to double-check a weather sequence in his virtual world. He typed in the numbers from his printout at the point where he wanted to restart the run and left
his office to let the McBee rattle away. Upon his return he was shocked to find that the new run, after just a few cycles, had totally diverged from the original run. Because each run started at the same point and followed the same laws as prescribed by his programmed equations, both runs should have been identical.
Lorenz double-checked the numbers he had entered to start the second run against those on the first printout. It was this comparison that led him to a startling discovery. For the second run he had entered the number 0.506, a rounded-down version of the printout’s number—0.506127. The two numbers differed by only 0.000127—a little more than one ten-thousandth.3
Based on the well-established scientific notion—proximate knowledge of initial conditions
—such a small change shouldn’t have affected the outcome of the run. It was known in the scientific community that absolutely accurate measurements of anything were not possible. But having a close measurement of initial conditions—proximate knowledge—was fine for making future predictions due to convergence
, a situation where minor perturbations in a system tend to cancel each other out, allowing the system to function in predictable ways. Under this paradigm, if you are a little off at the start in your measurements, it only means that you will be a little off at the end.
This concept is borne out in a scene from the movie Apollo 13
. As the disabled space capsule carrying Jim Lovell and crew approaches the Earth, they have to readjust their angle of descent. If the angle is too steep, they will burn up; if it’s too shallow, they will bounce off the Earth’s atmosphere, never to return. They need to readjust their descent pattern by using thrusters for a prescribed period of time while holding the capsule in a fixed position. Of course the timing of
the thruster use and the holding steady of the capsule couldn’t be accomplished with absolute accuracy. It didn’t matter, though, because if their initial conditions of thrust and position were proximate, it would be enough for a successful descent—one they accomplished.
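The idea of convergence can be sketched in a few lines of code. In a stable linear system, a small error in the initial conditions shrinks rather than grows, so two runs separated by Lorenz’s tiny 0.000127 discrepancy end up indistinguishable. The damping factor below is an illustrative assumption, not a model of any real spacecraft or weather system:

```python
# A toy stable linear system: each step pulls the state a fixed
# fraction of the way toward a target. The 0.5 damping factor is an
# illustrative assumption chosen only to show convergence.
def damped_step(x, target=1.0, damping=0.5):
    """Move the state halfway toward `target` on each step."""
    return x + damping * (target - x)

# Two runs separated by the same tiny discrepancy Lorenz typed in.
a, b = 0.506127, 0.506
for _ in range(20):
    a, b = damped_step(a), damped_step(b)
print(abs(a - b))  # the gap halves each step; the two runs merge
```

Here being “a little off at the start” leaves you a little off (in fact, less off) at the end—exactly the behavior proximate knowledge of initial conditions assumes.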
What Lorenz saw in the second run of his Royal McBee made him realize that not only would long-term accurate weather prediction be unattainable, but more importantly the notion of proximate knowledge of initial conditions was flawed. Slight alterations at the start of a system could indeed dramatically alter its future behavior. This would come to be known as the butterfly effect, after a hypothetical scenario: the beat of a butterfly’s wings in Asia creates minor air movements, initiating a long string of events that cascade through the meteorological system, eventually generating a powerful storm in the United States. As such, systems like Lorenz’s virtual weather weren’t predictable. Lorenz shared his findings with colleagues who, steeped in more than two centuries of Newtonian physics and believing in the rightful place of proximate knowledge of initial conditions, rejected the finding. Later they would come around, calling these systems chaotic. The naming of such systems as chaotic and the study of them as chaos theory show just how embedded in a linear paradigm Western science was—a paradigm grounded in predictability, spawned by the work of Galileo and Descartes, and later codified by Newton’s grand synthesis.
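Lorenz’s twelve coupled equations are beyond a short sketch, but the same sensitivity appears in the logistic map, a standard one-line chaotic system. The choice of map and the parameter r = 4 are illustrative assumptions, not Lorenz’s model; only the pair of starting numbers comes from his printout:

```python
def logistic_step(x, r=4.0):
    """One step of the chaotic logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

def steps_until_divergence(x, y, threshold=0.5, max_steps=50):
    """Iterate two runs until their states differ by more than `threshold`."""
    for step in range(1, max_steps + 1):
        x, y = logistic_step(x), logistic_step(y)
        if abs(x - y) > threshold:
            return step
    return None  # never diverged within max_steps

# A discrepancy of 0.000127 blows up into completely different runs
# within a couple dozen steps.
print(steps_until_divergence(0.506127, 0.506))
```

The error roughly doubles with each iteration, so even a measurement good to six decimal places buys only a handful of extra predictable steps—the computational heart of Lorenz’s discovery.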
For almost 2,000 years prior to the seventeenth century, Aristotle’s ideas formed the very foundation for Western natural science. At the heart of Aristotle’s work was change. He saw
matter and form as inextricably linked by a dynamic, developmental process of change that he labeled entelechy
—self-completion. The matter of a rotting log is transformed into a fungus, or milkweed leaves consumed by a caterpillar become transformed into a monarch butterfly. For Aristotle, to develop an understanding of the world, it was necessary to comprehend how change was continuously restructuring matter into new forms. Through this approach to studying the natural world, process and pattern became far more important than the material of which something was composed. As we will see, Aristotle’s underlying notions are strikingly similar to modern complex systems science.
In the early seventeenth century a new scientific paradigm emerged from the work of Galileo and later René Descartes. Both men advanced the idea that the world was composed of matter in motion. The best approach to understanding the nature of matter in motion was to reduce problems to simple terms that could be analyzed and solved through simple mathematical equations. This approach gave rise to reductionism—to understand how something worked, it had to be taken apart and its parts studied at increasingly smaller levels. Using this approach Galileo was able to predict the movements of planets with fair accuracy.
Descartes took reductionism even further. He was intrigued by machines that were prevalent in Europe in the early seventeenth century, such as wind-up clocks and various wind and water mills. These devices had a profound impact on his thinking, causing him to view the natural world as being composed solely of machines. “We see clocks, artificial fountains, mills and other similar machines which, though merely man-made, have nonetheless the power to move by themselves in several different ways . . . I do not recognize any
difference between the machines made by craftsmen and the various bodies that nature alone composes.”4
He knew that a machine could be understood if one knew all the parts and the sequence in which they interacted. In order to know the parts, one needed to take apart the machine—reductionism. Since each part would drive another that would in turn drive yet another part, the machine represented a linear system. A linear system does not mean that the system can’t run in a cyclic fashion like the hands of a clock. But it does mean that the system can’t feed back on itself. In a linear system each part works in a lockstep way with the other parts, so that the system always follows the exact same sequence of interactions between the parts. In this way a linear system is extremely predictable, and as such, controllable. After so many ticks of the gears, the minute and hour hands of a clock will have moved only so far. For Descartes the sum of a machine’s parts simply equaled the whole. What is lost in this paradigmatic view of the world is that the whole may be much more than the sum of its parts. More than any other scientist of his time, René Descartes changed the paradigmatic nature of Western science—from Aristotle’s holistic view to a reductionistic, linear view that focused on the parts rather than the whole. But it was Sir Isaac Newton who codified linear science with his study of mechanics.
I remember my high-school physics class and all sorts of experiments that we conducted to confirm Newton’s laws of mechanics, such as F = ma—force equals mass times acceleration. However, to get close to proving these laws, we had to use air machines to reduce friction as much as possible, or drop high-density items rather than low-density ones. Dropping baseballs from our high school’s third-story windows worked well to confirm Newton’s laws. But if we
had dropped dried leaves—impacted by wind currents and their own tumbling behavior—the equations wouldn’t have worked at all. The failure of Newtonian mechanics in certain circumstances was never discussed in this class. Instead Newton’s work was always shown to display the hallmark of science—predictability.
Descartes’s and Newton’s approaches to scientific inquiry have proven very powerful in various branches of science over the last four centuries, but their approaches have huge deficiencies when applied to the complex natural systems that surround us—biological, geological, meteorological—as well as human-generated systems, such as an economy. These systems all have parts that can interact with other parts in different ways at different times, allowing these systems to loop, or feed back on themselves. It is this process of feedback that creates a number of profound differences between complex and linear systems.
Because of a complex system’s ability to feed back on itself, it quickly loses the inherent predictability of a linear system. Each morning millions of people in this country commute to work, often driving cars into urban areas. For these commuters it is not possible to predict exactly how long the commute will take. The choked highways generate their own feedback. A driver leaves home in good spirits but finds himself becoming anxious at the somewhat slower pace of the commute. He becomes more aggressive than usual and cuts into a lane too close to the car behind him. That driver brakes unexpectedly and gets rear-ended by the car behind him. One lane of traffic is stopped and backs up the highway for miles.
In this case the slower commute fed back on itself, creating conditions that further slowed the drive. This is called positive feedback
because it forces the system to keep moving in the direction it started—slow to slower.
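A positive feedback loop can be caricatured in a few lines. The 10 percent amplification rate per step is purely an illustrative assumption, not a traffic model; the point is only that a self-reinforcing loop compounds rather than settles:

```python
# Toy positive feedback: each bit of delay breeds proportionally more
# delay (the 10% rate per step is an illustrative assumption).
delay = 1.0  # one minute of initial slowdown
for _ in range(30):
    delay *= 1.10  # the slowdown feeds back on itself
print(round(delay, 1))  # the one-minute delay has compounded to ~17 minutes
```

Slow to slower: left unchecked, the loop grows exponentially, which is why a single tap of the brakes can back up a highway for miles.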
The same is true for weather. Last night our weather report predicted two to four inches of snow by morning. It’s now 9 A.M., partially sunny, and no snow has fallen. This morning’s weather report talks only of flurries this afternoon. How could a forecast change that much in just twelve hours? The answer is that weather is a complex system that feeds back on itself. In this instance a Canadian high-pressure system moved faster than expected because of influences from other Northern Hemisphere frontal systems. The high-pressure system deflected the snow-producing, low-pressure system farther to the south. I doubt a butterfly in China is the cause of this change, since it is winter there as well, but here in 2004 Lorenz’s sense that consistently accurate weather prediction would never be possible is holding true.
In Lorenz’s day, complex systems like the flow of commuter traffic and weather were called chaotic because it wasn’t possible to predict what they would be doing at an exact point in time—as opposed to a linear system, which is predictable. Since prediction wasn’t possible, commutes and weather were seen as chaotic, messy things. I agree that particular kinds of weather and commutes can be quite messy, but in fact such systems are not chaotic at all. If we examine the patterns generated by commuters or weather over larger scales of time, like a year or a decade, then the behavior of the system becomes quite conservative and predictable. It’s true that we can’t accurately predict the weather one week from now, but based on many years of data we can confidently assume that January will be the coldest month of the year in Vermont, that
we won’t get low-elevation snows in July, and that November will be our cloudiest month. The patterns consistently repeat themselves. So these systems are not chaotic; it’s just that we can’t predict exactly what they will be doing at any particular point in time. Because of their conservative, long-term behavior, scientists no longer call these systems chaotic or the study of them chaos theory. Nonlinear systems
are now called complex systems and the study of them complex systems science.
Attributes of Complex Systems
Because of feedback, complex systems share a number of attributes not observed in linear systems. These attributes will be instructive in our examination of progress, since all socioeconomic systems are complex. The attributes I will focus on are: emergent properties, self-organization, nestedness, and bifurcation.
Since the parts of a complex system can interact in numerous ways, researchers in this arena quickly realized that it was much more productive to study the interactions between the parts as well as the pattern (or system behavior) that emerged from those interactions—rather than the parts alone. This represents a return to the process and pattern of Aristotle. What was also realized is that the system behavior or pattern was far greater than the sum of a system’s parts. Complex systems generated emergent properties—things that couldn’t be predicted by just examining the parts.
A trip to the savannahs of Kenya brings people into direct contact with one of the most impressive animals on the planet. It’s not the elephant, giraffe, or lion: rather, it’s the African termite. These termites build huge mounds, some more than twenty feet in height and as large as a small house. Big mounds house colonies that number in the tens of millions of termites. By examining individual termites, would it be possible to predict that they could create such massive structures, or that they can maintain an internal mound temperature that varies by only a few degrees?
Activities within a termite colony are controlled by the queen. The queen is trapped—with her king—in an underground nuptial chamber. Besides controlling the colony’s behavior, the queen’s other role is to produce about 100,000 eggs a day to keep the colony well stocked with workers and warriors. To create this kind of egg production the queen’s body grows to immense size in comparison to her workers. While a worker is the size of a small ant, the queen is as fat as a person’s thumb and about a half-foot long. This leaves the queen unable to move or even care for herself. She has to rely on her workers for everything, including feeding.
Termites communicate via the chemistry of their saliva. Whenever two termites meet, they “kiss” and exchange saliva, transmitting chemical messages. If the mound gets too warm, the chemistry of workers’ saliva changes. As workers meet, the chemical message is passed throughout the mound. Eventually the message is passed to the queen during feeding by a worker. Along with eggs, the queen also produces a continuous, chemical-rich secretion that exudes from pores all along her abdomen. Workers constantly suck up these secretions along with their chemical messages and pass them along through the colony. When the queen picks up the message through her feeding that the mound is getting too hot, the chemistry of her secretions changes. Workers tending the queen pick up the new chemical message and carry it out into the colony. Workers in the colony who receive this
chemical message stop what they are doing and make their way far underground to the water table, where they fill themselves. They then climb back up into the mound and paint its walls with water. Evaporation of the water lowers the mound’s temperature, and when the correct level is reached, chemical messages come back to the queen and her water-gathering message is turned off.5
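In control terms, the mound’s temperature regulation is a negative feedback loop: the response (painting water) opposes the disturbance (heat) and switches off once the target is reached. A toy sketch, with the target temperature, heating rate, and cooling rate all invented for illustration rather than taken from termite data:

```python
# Toy negative-feedback thermostat modeled on the mound's behavior.
# All temperatures and rates are illustrative assumptions.
temp, target = 33.0, 30.0  # start the mound a few degrees too warm
readings = []
for hour in range(24):
    too_hot = temp > target            # the "too warm" chemical message
    cooling = 0.8 if too_hot else 0.0  # workers paint water only while the message circulates
    temp += 0.3 - cooling              # the sun warms the mound; evaporation cools it
    readings.append(temp)
print(min(readings), max(readings))  # the mound settles near the target
```

Unlike the commuter example, where feedback amplified the initial slowdown, here the loop damps the disturbance—the temperature oscillates within a fraction of a degree of the target, matching the colony’s few-degree stability.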
The mound, the maintenance of its temperature, and the chemical communication pathway are all emergent properties of the termite colony that couldn’t be predicted from examining individual termites. A focus on individual parts—the termites—in a reductionistic, linear approach would completely miss the large-scale behavior of the colony.
In a way the termite colony is a superorganism, with warriors functioning as the immune system, workers as the nervous system and musculature, and the queen as the brain and reproductive organs. Like the termite colony, all organisms are complex systems—something lost to Descartes....