A Complexity Theory for Public Policy

Göktuğ Morçöl

About This Book

Complexity theory has become popular in the natural and social sciences over the last few decades as a result of the advancements in our understanding of the complexities in natural and social phenomena. Concepts and methods of complexity theory have been applied by scholars of public affairs in North America and Europe, but a comprehensive framework for these applications is lacking. A Complexity Theory for Public Policy proposes a conceptual synthesis and sets a foundation for future developments and applications.

In this book, Göktuğ Morçöl convincingly makes the case that complexity theory can help us better understand the self-organizational, emergent, and co-evolutionary characteristics of complex policy systems. In doing so, he discusses the epistemological implications of complexity theory, the methods complexity researchers use, and the methods they could use. As complexity studies spread around the world in the coming decades, the contents of this book will appeal to larger audiences, particularly scholars and graduate students in public affairs. The unique combination of synthesis and explanation of concepts and methods found in this book will serve as a reference frame for future works.


Information

Publisher
Routledge
Year
2013
ISBN
9781136283468

Part I
Concepts

1 Fundamental Concepts of Complexity Theory

The purpose of this chapter is to lay the groundwork for the conceptual elaborations and discussions in the following chapters. I discuss the key concepts of complexity and complex systems in this chapter. I begin with a clarification of the concept of complexity: Is it in large numbers or in the nature of relationships? Is it in the nature of things or in our knowledge of things? Nonlinearity is the most fundamental concept in complexity theory; it is considered the primary generator of complexity. This term does not signify merely a lack of linearity; it can be defined positively. I address the theoretical issues in the commonly used concepts of “complex systems,” “complicated systems,” and “simple systems.” My argument is that the distinctions made among them are not sustainable. I also discuss the concepts of “complex systems” and “complex adaptive systems” and explain my preference for the former.

WHAT IS COMPLEXITY?

What do we mean by “complexity”? Is it in the nature of things? Or is it a function of the way human beings know the things in their worlds? In other words, is it ontological or epistemological or both? Do we call a situation (e.g., a policy problem) complex because that is the way policy problems are? Is it complex because perhaps it is beyond the comprehension of our cognitive capabilities? Alternatively, is there a mismatch between the way our minds work and the way things are?1
In my earlier discussions on the social constructionist views of public policy, I mentioned that those theorists attribute the complexity of public policies to social construction processes. If the policy process is complex, this is because of the myriad of ways human beings construe realities around them. This is not necessarily incorrect, but it reflects only a partial understanding of complexity. Complexity theorists suggest that policy processes are complex not only because policies are social constructions but also because the natural processes that public policies interact with are also complex.
Take the global warming issue as an example. It is complex not only because there are many interpretations of it, which are related to individuals' and groups' perceptions of their self-interests (such as automobile manufacturers' and oil companies' interests in keeping up oil consumption) and dominant value systems in advanced industrialized countries (such as the attachment of Americans to their cars, not only as vehicles of transportation but also as symbols of a lifestyle). It is complex also because the natural processes that generate global warming (atmospheric conditions, interactions between the levels of greenhouse gases in the atmosphere and temperatures, etc.) are complex. Therefore, a good understanding of natural complexities should be part of understanding the complexities of public policies.
Complexity theory can help us define complexity in a new way. Before I describe that new way, however, I should note, once again, that there is no unified complexity theory, yet. Complexity theorists do not even offer a definition of complexity that they agree on (see Rescher, 1998, pp. 2–3; M. Mitchell, 2009, pp. 96–111). What I propose in the following sections is my understanding of complexity, a synthesis of these definitions and conceptualizations.
Complexity is usually associated with large numbers. Although this is not incorrect, complexity theory shows that complexity is not always a product of large numbers. It is primarily a product of nonlinear relations. It is also an emergent property and a product of coevolutionary processes. All these (nonlinearity, emergent properties, and coevolution) are characteristics of all complex systems, natural or social. In social systems there is also the complexity that is a product of social construction processes.

Complexity in Large Numbers

It makes intuitive sense to define complexity in terms of large numbers: the higher the number of elements, the higher the degree of complexity. This commonsense numerical definition of complexity has some validity, but it is not always correct. Large numbers of uniformity do not generate complexity. If one has a large number of boxes of the same shape and size, for example, stacking them up is not a complex task. It may be a hard, backbreaking job but not a complex one. One can easily define how this job needs to be done and repeat the procedure as many times as needed to complete the job.
Sharkansky (2002, p. 1) applies this numerical definition of complexity to policymaking. He says that policymaking is complex because there are numerous governmental units and non-governmental organizations involved in it. For instance, in the US there are about 90,000 governmental units. Boris (1999, p. 6) notes that there are also about 1.5 million nonprofit organizations with a variety of public service delivery functions. Of course, there are also tens of thousands of for-profit organizations that may be involved in policy processes, one way or the other. It makes intuitive sense that with so many units involved, the policy process in the US must be very complex. Because of these high numbers, one might argue, it is not possible to comprehend the policy processes in the US in their entirety.
However, if they are all organizations operating under a common rule, the description of their interactions will be simple. It is not merely the number of units/elements in a system, or the number of the types of those units/elements, but also the number of the types of interactions among them that makes a system simple or complex. A policy system is complex if there are multiple kinds of interactions among its elements/units. This could happen when there are different rules governing different kinds of organizations. For example, for-profit organizations differ from public organizations in that the former exist to maximize profits, whereas the latter are formed to serve the interests of their respective publics. Then the question is this: Can we reduce the number of the rules governing these different kinds of organizations? To the extent that they are subjected to the same rules, and to the extent that they actually operate according to these rules, we can simplify the system. This can be done only partially. All public, nonprofit, and for-profit organizations can be subjected to the same non-discrimination laws for their employees, such as the Americans with Disabilities Act rules. But this is only one small aspect of the rules under which they operate. The US Environmental Protection Agency and a hedge fund firm operate under very different formal and informal rules, for example.
To further stress that there is no proportionate (linear) relationship between large numbers and complexity, I want to point to the example of sequencing the human genome. In his New York Times story on the first ten years of the Genome Project, Wade (2010) notes that scientists were surprised to find out that the number of genes in humans is not much larger than the number of genes in species that are much less complex. For instance, roundworms, which are at a very low level of biological evolution and much less complex than human beings, have 20,000 genes that make proteins. Humans have 21,000 genes. So there is only a 5% difference between the numbers of genes these two species have. But the difference between the complexities of the two species is obviously much more than this 5%. Humans are far more complex than roundworms. Scientists note that it is not only the numbers of genes but the ways these genes are connected to each other and the ways they are regulated that make the difference in the degrees of their complexity (see Szathmáry, Jordán, and Pál, 2001).

Complexity Theory and Complexity Reduction

As I argued in the introduction to this book, we, human beings, have the propensity to simplify; that is the way our cognitive system works. Complexity theorists are not immune to this human propensity; they too simplify, as I will show with multiple examples in the following chapters. But they do not agree with the Occam's razor principle; they do not think that the simplest explanation is always and necessarily the best explanation.
There is no commonly accepted definition of complexity in complexity theory, as I mentioned earlier. M. Mitchell (2009) identifies nine different definitions articulated by complexity theorists (pp. 96–111). She points out that there is a commonality among six of these definitions: They define complexity and simplicity in terms of the nature of information content. Complexity is defined as “entropy” (the degree to which a message is orderly), “algorithmic information content” (number of steps it takes to describe a system), “logical depth” (measure of how difficult to reconstruct an object), “thermodynamic depth” (amount of information required to reconstruct an object fully), “statistical complexity” (“minimum amount of information about the past behavior of a system that is needed to optimally predict the statistical behavior of the system in the future”), and “fractal dimension” (to the extent that an object can be reconstructed in fractal dimensions, rather than Euclidian discrete dimensions).2 One of the definitions Mitchell identifies is about the “computational capacity” of the receiver of the information (e.g., a human brain); so it is also information related.3
Mitchell's observation that most definitions of complexity are information related has important implications. If complexity is in the information content, then it involves both the “sender” of the information and its “receiver.” In other words, complexity is in both the nature of the reality that is “sending” the information and the receiver that receives and interprets it. This receiver may be a human being, a group of human beings, an animal, a plant, or a computer. Then the respective natures of both the sender and receiver determine to what extent the information is complex. In other words, complexity is partly in the eye of the beholder. I will discuss the general epistemological implications of this in Chapter 6. Here I want to address a specific implication: The complexity of a system can be defined in degrees, depending on the information-processing capacities and modes of the receiving system. I will intentionally ignore the problem of the nature of the receiving system here and postpone a discussion on it until Chapter 6.
Let's take the definition that complexity is algorithmic information content, otherwise known as the “algorithmic complexity” or “computational complexity” approach in defining complexity. In this approach, the complexity of an object, or a system, is defined in terms of how long, or how many steps, it would take to carry out a computation to describe the object, or the system, fully (Casti, 1994; Gell-Mann, 1995; Dooley and Van de Ven, 1999). In Casti's words, in this approach, “complexity is directly proportional to the length of the shortest possible description of [an] object” (p. 9). If we can find an algorithm, a rule, to simplify counting, or describing, the units in a system (rather than counting, or describing, them one by one), we can reduce the description of the system. The shorter the algorithm, the simpler the description of the system. In this definition, the complexity of a system varies from “maximal complexity” to “orderly behavioral regimes”; in between the two extremes one can find different degrees of complexity (Dooley and Van de Ven).
In the case of maximal complexity the description of an object, or the pattern of its behaviors, is the object or pattern itself: A maximally complex object or pattern is “incompressible” or “irreducible.” Orderly behavioral regimes, on the other hand, are completely describable with simple rules and formulas: They are “compressible” or “reducible.” Linear mathematical models are well suited for this purpose. They can be used to reduce the complexity of the relations between elements and describe them in simple formulas. All the other forms in between these two extremes are partly describable (compressible, reducible) at varying degrees. It takes higher-order descriptions, nonlinear equations, and multidimensional attractors to describe them.
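The compressibility spectrum can be made concrete with a small sketch of my own (it does not appear in the book). True algorithmic information content, the length of the shortest possible description, is uncomputable, so the example below uses Python's zlib compressor as a crude, practical stand-in: an orderly pattern governed by one short rule compresses to a tiny fraction of its size, while random bytes, which admit no shorter rule, barely compress at all.

```python
import random
import zlib

def description_ratio(data: bytes) -> float:
    """Compressed length divided by original length: a crude proxy
    for algorithmic information content (the true shortest-description
    length is uncomputable)."""
    return len(zlib.compress(data)) / len(data)

# An "orderly behavioral regime": one short rule ("repeat 'ab' 5,000
# times") describes all 10,000 bytes, so the data are highly
# compressible (reducible).
orderly = b"ab" * 5000

# Near-maximal complexity: random bytes admit no shorter rule, so
# their description is roughly the data themselves; they are
# incompressible (irreducible).
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))

print(f"orderly: {description_ratio(orderly):.3f}")  # far below 1
print(f"noisy:   {description_ratio(noisy):.3f}")    # close to 1
```

The ratio acts like a rough position on the spectrum: values near zero correspond to orderly regimes, values near one to maximal complexity, and anything in between to partial compressibility.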
Then the question would be: To what extent can we simplify when describing a policy system? In other words, to what extent can we simplify the counting, or categorization, of its elements and/or their relationships with a formula, an algorithm, or a verbal description? If the relationship among the elements of a system cannot be defined with a common algorithm, then a simple rule cannot describe the system. If a system is perfectly orderly, this means that there is a simple rule that can explain all properties of the system. In other words, the description of the system is completely compressible. Einstein's formula E = mc² is an example of this.
But Einstein's formula describes a physical system, which is simpler than a social system. Now consider the earlier example I gave for multiple types of organizations operating in the policy processes in the US. To the extent that they operate according to the same set of rules, we can simplify the description of the policy system. Consider the hypothetical possibility that each of the 90,000 governmental units, 1.5 million nonprofits, and millions of for-profits is entirely unique and each one relates to others in a unique manner. Then there would be no rule that could help us describe the entire system. If each unit has a unique set of characteristics and relates to each other in a unique manner, then there will be as many descriptions as there are units and their relations. This would be an example of maximal complexity. Alternatively, consider the hypothetical possibility that there is only one rule that can describe it all. Everything we may possibly want to know about all governmental units, nonprofits, and for-profits is described with that one rule. For instance, “They are all organizations, and all organizations have the exact same characteristics. All organizations relate to one another in a single manner.” This would be an example of ultimate simplicity.
The fundamental assumptions, the ideals, of the classical management theorists of the early 20th century were very close to this notion of ultimate simplicity. They aimed to reduce the complexities of organizational management by defining it in terms of a few principles. Luther Gulick's and Lyndall Urwick's (1937) POSDCORB model and Henri Fayol's (1963) fourteen “principles of management” were both aimed at finding a limited number of rules and abstractions that were supposed to define all organizations everywhere and at all times. Similarly, Frederick Taylor's (1947) “scientific management” was supposed to define a universal science of management that would reduce the complexity of managing work into a few simple rules.
The “human relations” revolution of the 1930s and the subsequent psychological and social psychological theories of organization suggested that there were actually many types of organizations and that the effectiveness of managerial methods would depend on the particular type of the organization under consideration. The contingency theories were good examples of this (Morgan, 1997, pp. 44–50). By pointing out that there were actually several categories of organizations, these theorists increased the complexity of our understanding of organizations. They also stressed that some commonalities could be found among organizations. In other words, the knowledge of organizations was “compressible” to a degree, in their view, but not to the degree that classical management and scientific management theorists had suggested earlier.
Richardson (2010) brings up an important issue in regard to the notion of the compressibility of information about a system's components. Richardson argues that it is the description of the behavior of a system, not that of its components, that determines the degree of its complexity. Richardson points out that “complex systems” are complex because they are incompressible “in behavioral terms but not necessarily in compositional terms” [emphasis added] (p. 21). In other words, we may have the complete knowledge of the components of a system and their composition, but even then we cannot predict the future behavior of the system “without running the system itself” (p. 21). No “algorithmic shortcut” is available for a complete description of the future of the system.
An important issue in the “complexity as algorithmic information content” approach is whether or not maximal complexity means randomness. In Casti's (1996) and Dooley and Van de Ven's (1999) views, these concepts are one and the same thing: If a system is incompressible, or irreducible, it is random (i.e., complex). Richardson (2010) agrees that incompressibility is closely related to randomness, but makes a differentiation between the complexity of a system and randomness. He notes
… whether a complex system is random or not depends strongly on one's tolerance for noise. If one demands complete understanding, then the system of interest needs t...
