A New Outlook on Nature and Human Agency
It is important to distinguish between the growing relevance of indeterminacy, which is a story dating back at least to the beginning of the 20th century, and the changing outlook on indeterminacy, which is a more recent phenomenon. In physics, chemistry, biology, economics, computer science and elsewhere, growing acknowledgement of the import of indeterminacy has long been complemented with strategies for coping with it, claiming capacities for handling it in spite of incomplete characterizations of the state of affairs. Quantum mechanics offers an obvious example.1 The indeterminacy of matter at the atomic level and the obscurities of the theory itself did not prevent successful predictions and the development of powerful technologies. Another example comes from economics, where John Maynard Keynes (1921) famously made a case for 'personal probabilities' – that is, subjective risk estimates rather than proper calculations – as triggers of decisions that nonetheless were assumed to remain rational.
1 As is well known, according to Heisenberg's indeterminacy principle, the observation and the physical state of particles (position and momentum) are not independent.

At some point, however, things start to change. Indeterminacy is no longer seen as a problem to handle, but rather as a resource. This understanding, too, can actually be traced back to the early 20th century. One example is Frank Knight's (1921) outlook on uncertainty as a premise of, rather than an obstacle to, entrepreneurial creativity and profit. However, as Michel Foucault has stressed, change in rationalities and ways of dealing with issues is often more a matter of evolution than revolution; better, it is a matter of intensification of some features, up to a point at which a qualitative shift takes place (see Chapter 2). In our case, whatever its origins, this shift became perceptible some decades ago, gaining growing salience in the following years.
Ecology, for example, has as a discipline traditionally built on the assumption that ecosystems tend to return to balance after perturbations. This view is still relevant, as we have seen in the Introduction. It is implicit in the successful notion of the 'Anthropocene' (Crutzen and Stoermer 2000): the idea that the present geological era is marked by the perturbations that human activities are causing to the Earth. However, a major conceptual shift began to take place in the 1970s. The thinking of Eugene Odum's generation, with its assumptions of order and predictability, has been gradually replaced by 'a new ecology of chaos' (Worster 1990: 8; see also Timmerman 1986), according to which there is no spontaneous tendency to equilibrium in nature: no progressive biomass stabilization; no diversification of species or movement towards greater cohesiveness in plant and animal communities. The idea is that change goes on forever, with no direction or tendency to stability; no cooperation, consistency and holistic organization but rather competition, patchiness, fragmentation, individualistic association. Disturbances or perturbations (wind, fire, rain, pests, predators and so on) are therefore claimed to be intrinsic to ecosystems rather than effects of human action. Hence, contingency, disorder and catastrophic transitions are not against life, but what life depends on. 'Populations rise and populations fall, like stock market prices, auto sales, and hemlines. We live … in a non-equilibrium world' (Worster 1990: 11).
Similarly, in chemistry and physics attention has increasingly focussed on 'dissipative structures'. The concept was coined by the Nobel laureate in chemistry Ilya Prigogine, whose work began in the 1950s but reached full-fledged development from the 1970s onwards, also through his collaboration with the philosopher Isabelle Stengers (e.g. Prigogine and Stengers 1979). A dissipative structure is a thermodynamically open system that works in a far-from-equilibrium condition and is characterized by the spontaneous formation of anisotropy – that is, of dissymmetry and bifurcations – which produces complex, sometimes chaotic, structures. Dissipative systems call into question entropy as a universal law of nature, and with it the 'heat-death' destiny of matter. Prigogine and Stengers 'replace Lord Kelvin's nineteenth-century cosmology of decline … with a biocosmological law of increasing complexity' (Cooper 2008: 39). The key notion is that of the clinamen, Lucretius's account of the unpredictable swerve of atoms that leads to the emergence of matter. The opposition to entropy, through spontaneous bifurcations and reorganizations that follow unpredictable paths, is today regarded as an overriding tendency – the rule rather than the exception – of material phenomena of all sorts. Irreversibility and instability replace determinism, which was crucial to previous approaches to physics, from Newton to Einstein. In this account indeterminacy is not a problem but rather a crucial 'enabling' feature. This normative orientation emerges clearly from Prigogine's considerations about Ludwig Boltzmann and Charles Darwin. Boltzmann, notes Prigogine, drew inspiration from the theory of Darwin: 'the man who defined life as the result of a never-ending process of evolution and thus placed becoming at the centre of our understanding of nature' (1997: 19).
Both Boltzmann and Darwin replaced the study of individuals (particles or organisms) with the study of populations, showing that slight variations over a long period of time produce evolution at a collective level. Yet, while Boltzmann described an evolution towards uniformity and equilibrium, Darwin sought to explain the appearance of new species. 'Significantly, these two theories had very different fortunes. Darwin's theory of evolution … remains the basis for our understanding of life. … Boltzmann's interpretation of irreversibility succumbed to its critics' (1997: 21).
A further example comes from cybernetics. N. Katherine Hayles (1999) has analysed in detail its history and its role in the emergence of the 'post-human' theme. The first wave of cybernetics, which Hayles dates from 1945 to 1960 and whose central figures are Norbert Wiener and John von Neumann, takes homeostasis as its crucial notion. The central problem, for machines as well as living organisms, is to ensure control over their operations and integrity in a chaotic environment. The second wave, between 1960 and 1980, builds on the concept of feedback, which introduces a loop between observing and observed systems; hence the notion of reflexivity, around which Humberto Maturana and Francisco Varela articulate their influential theory of 'autopoietic systems'. These are physically open yet informationally closed systems. They react to the environment according to their own patterns and codes, which is actually what constitutes them as systems. The third wave of cybernetics begins in the 1980s and stretches to the present. Hayles identifies it with artificial life. The crucial conceptual shift, here, is from self-organizing to emergent systems. The attempt is to create computer programs that reproduce evolutionary processes, with neural nets emerging from the complex interaction of the simple elements of the system. Again, the contingent, disordered character of a world where the natural and the artificial are increasingly indistinguishable is understood as a powerful resource rather than a troublesome feature that systems have to handle.
Indeterminacy: From Trouble to Resource
In short, a turn – simultaneously descriptive and normative – from order to disorder and from predictability to indeterminacy seems to gain momentum from the 1970s onwards in a plurality of fields. This account regards the biophysical world as characterized by uncertainty and instability, these features representing the basis of its dynamism and liveliness, of the capacity of organic and inorganic matter to change, taking novel shapes and structures. Accordingly, it is not, as previously assumed, by seeking control through closure, regularization and prediction, but rather by acknowledging and 'seconding' or 'riding' the unpredictable, contingent, emergent constitution of things, that humans can pursue their goals. This was already evident to the nuclear physicist Alvin Weinberg when, at the beginning of the 1970s, he coined the term 'trans-science' to convey the idea of a science increasingly confronted with 'unbounded' issues, engaged in experiments outside the lab, as in the case of the management of radioactive waste (Weinberg 1972). The relevance of 'real world', or 'social', experiments has been increasing over the years, extending to bio-nanotechnologies, electromagnetic waves, global warming and the Internet, even if nuclear waste remains a prominent issue (van de Poel 2011).
An effective way to depict this change is what Silvio Funtowicz and Jerry Ravetz described as the shift from a 'puzzle-solving' approach to scientific practice (clearly formulated questions, controlled experimental conditions, no immediately relevant value controversies, tractable size of stakes etc.) to 'post-normal science', characterized by situations in which 'facts are uncertain, values in dispute, stakes high and decisions urgent' (1993: 744). From handling 'known unknowns', in other words, we increasingly find ourselves dealing with 'unknown unknowns' – things that we do not know we do not know, and that we cannot reveal through contained experiments, yet which may have crucial impacts on the decisions we take (see also Gross 2010). As Brian Wynne (1992) stressed in a seminal article, indeterminacy and decision-stakes cannot be separated: one is implied in the other. In other words, it is the way we relate to the world, the expectations we have about its inclusion in our plans, that defines the border between the known, the knowable unknown and relevant ignorance.
Funtowicz and Ravetz, Wynne and others draw from the growing salience of this deep or radical uncertainty the need for enhanced carefulness, modesty and restraint in our dealings with the biophysical world, as well as for decisions that, given their inherently precarious and 'socialized' character, cannot remain the restricted domain of expert groups, corporate managers and government officials, but should be open to broad participation. In this sense, their view is consistent with the traditional understanding of indeterminacy as agent-constraining non-determinability. The new outlook, by which indeterminacy corresponds rather to an agent-enhancing non-determination (Pellizzoni 2010), is instead perfectly captured by Nassim Nicholas Taleb's notion of 'antifragility'.
Fragile systems, according to Taleb, are those that are based on knowledge, control and predictability (that is, on the calculation of risk). For him, the whole organization of society (economy, politics, health, education etc.) has traditionally focussed on suppressing randomness and volatility, becoming fragile, or at best endowed with some degree of resilience to anticipated events. Yet, outside confined man-made situations (the typical example is the casino), the world does not work according to regularity and prediction. Robust or resilient systems are designed to withstand and recover from a shock, the features of which, however, need to be anticipated. Even robust systems, therefore, are sooner or later caught off guard by 'black swans', that is, unpredictable and irregular events of massive consequence. Antifragile systems, then, are those that withstand randomness, uncertainty, volatility and errors. They can deal with, and indeed 'love', unknown unknowns. They are able to benefit from indeterminacy and disorder. 'Antifragility has a singular property of allowing us to deal with the unknown, to do things without understanding them – and to do them well' (Taleb 2012: 4). In making his case, Taleb borrows significantly from traditional ecological thinking, for example when he stresses that antifragile systems are designed around downsizing, decentralizing, subtracting and simplifying, so as to allow for small mistakes, quick learning and easy reversibility of choices. However, his 'affirmative' tone is far removed from the prudent, humble approach to human agency typical of this tradition (see e.g. Norgaard 1994), as appears from the following statement:
By grasping the mechanisms of antifragility we can build a systematic and broad guide to nonpredictive decision making under uncertainty in business, politics, medicine and life in general. … Let me be more aggressive: we are largely better at doing than we are at thinking, thanks to antifragility (2012: 4, emphasis original).
Another significant feature of Taleb's approach is that, whatever the account of indeterminacy in each particular case (probabilistic uncertainty, incomplete characterization of the state of affairs, sheer ignorance, chaos, randomness, singularity etc.), its effects are deemed to be 'completely equivalent' (2012: 13). In other words, the difference between the possibility for something to happen and the possibility (according to someone) that something happens loses any relevance. This difference was quite clear in the 18th and 19th centuries (Hacking 1975), yet in the 20th century subjectivist, agent-centred accounts have gained growing relevance (Pellizzoni 2010). Taleb's account, then, can be taken as the eventual outcome of this trend; an outcome by which 'uncertainty owing to lack of knowledge is brought down to the same plane as intrinsic uncertainty due to the random nature of the event under consideration' (Dupuy and Grinbaum 2004: 10); by which, in other words, ontological and epistemic indeterminacy are, for any practical purpose, the same thing.
Taleb's account of antifragility synthesizes the peculiar outlook on nature and human agency that has been spreading in recent years, examples of which will be analysed in the following sections. Wynne's argument about the reciprocal implication of indeterminacy and decision-stakes indicates that the biophysical challenges elicited by scientific and technological advancement and the socio-cultural underpinnings of that advancement impinge on one another. The outcomes that are taking shape in these years require, however, careful analysis.
Consider the account of human attempts at the mastery of nature provided in the aftermath of World War II and Hiroshima by Max Horkheimer and Theodor W. Adorno. Dialectic of Enlightenment (Horkheimer and Adorno 2002) describes the tragic contradiction between humans' efforts to find protection from nature and the unknown through knowledge and technical control, and their subjection to these very instruments of emancipation. This contradiction, Horkheimer and Adorno argue, finds a full-fledged expression in modern science and technology, as based on the core dualism of Western metaphysics: mind and world are regarded as ontologically separate, hence the former...