Robert Lucas, Nobel Laureate in Economics, once declared that 'in cases of uncertainty, economic reasoning will be of no value' (Lucas 1981: 224). It is true that in Lucas's model economies there is no way of handling uncertainty, in the sense of uncompletable lists of contingencies, causes, and even options; but it is no less true that only in cases of real-world uncertainty does economic analysis have any potential value. If uncertainty is absent, then every problem situation can be fully specified (if necessary with a probability distribution defined over a complete list of possible states of the world) and choice is reduced to a logical operation. This is indeed the world of rational choice models. But then the process of choosing, and the ways in which the process of choosing is organised, are empty topics, as has been observed by Knight (1921: 267-8) and Hutchison (1937). If all optima can be calculated (with due regard, of course, to future possible states of the world), well-motivated economic agents (and to economists like Lucas no other kind is conceivable) will have calculated them already; and economists will be the last to know. In such a world economists can demonstrate only that what has already happened in practice can also happen in theory. Thus not only are there no hundred-dollar bills lying on the street, there are no hundred-dollar bills on offer for economic advice. Economic analysis is of potential value only if people do not already know what to do: the foundation for useful economic theory must be incomplete knowledge, or partial ignorance. In these chapters I wish to suggest how useful economic theory can indeed be built on this foundation.
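To make concrete what 'reduced to a logical operation' means here, a minimal sketch may help; the notation is introduced purely for illustration and appears in none of the texts cited. Given a complete set of options A, a complete list of possible states of the world S, a known probability distribution p over S, and a payoff function u, choice collapses into the single calculation

a* = argmax_{a ∈ A} Σ_{s ∈ S} p(s) · u(a, s),

whose outcome is fixed as soon as A, S, p and u have been written down; nothing is left for a theory of the process of choosing to explain.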
Obstacles to knowledge
The basic issue here is epistemological. What can we know? In this section we will briefly review six reasons why human knowledge is necessarily incomplete, before going on to consider institutions and evolution as ways of mitigating these problems, and of responding to the opportunities which are the obverse of the difficulties that they create. These six obstacles to complete knowledge may be summarised as the insufficiency of induction, complexity, the limits of human cognition, exogenous change, the interdependence of individual initiatives, and conflicting ideas and purposes. Because the intention in this book is to focus on a particular set of economic consequences of uncertainty (a set which, it should at once be made clear, gives no emphasis to macroeconomic issues, where the neglect of uncertainty may lead to substantial error), the sources of uncertainty will be stated and not explored; but in order to understand the implications of uncertainty it is necessary to explain why '[e]rror is inseparable from all human knowledge' (Menger 1976 [1871]: 148).
The first and fundamental obstacle is David Hume's problem: the impossibility of certain, or justified, knowledge of universal laws, other than the knowledge of logical relationships. The instances that we observe, even when supplemented by the reported observations of others – which, of course, are not necessarily reliable – cannot be more than a tiny fraction of all possible instances, and they crucially and necessarily exclude all observations from the future; to treat the observations that are available as a representative sample we have to assume that they are drawn from a population which displays precisely those characteristics that we deduce from that sample. Hume reminds us that the existence of such a population can never be proved; all expectations, however carefully formulated, are conjectures which go beyond the evidence. Indeed, since knowledge of unique events is interpreted by the application of general laws, which no amount of attainable evidence can ever prove to be true, even 'the knowledge of the particular circumstances of time and place', to which Hayek (1945: 521) rightly attached such importance, can be disputed, as we observe often enough, and as Hayek (ibid.: 524) himself explicitly recognised.
There is a great diversity of views about the degree and even kinds of knowledge that are attainable, which I shall not discuss in this book. The view taken here is that all knowledge should be considered as conjectural, in Karl Popper's (1963) sense of hypotheses which, though apparently corroborated, always remain open to refutation, but that it is nevertheless possible in some circumstances to attain knowledge which is highly reliable, as John Ziman (1978) has argued with care, insight and lucidity. As Hayek became increasingly aware, the processes by which reliable knowledge may be obtained are a central issue in economic organisation; and Popper's transformation of Hume's problem of induction into a theory of scientific progress which liberated scientists' imagination (Medawar 1984) may be generalised into a theory of economic development, in which the economic system is not only an allocation mechanism but a prime source of novelty.
In both science and the economy, it is precisely because our present knowledge is incomplete, and some of it wrong, that we have the hope of improvement. Uncertainty is a source, not only of threats, but also of opportunities, even for economic theorists. The possibility of economic and social progress through the growth of knowledge inspired Marshall's work; it pervades the Principles, though its presence may not be recognised by anyone obsessed with allocative efficiency. I believe – this, of course, is a conjecture – that, in the words of George Kelly (1963: 6), 'the universe is really existing and that man is gradually coming to understand it'; I also believe – this is a fundamental value judgement – that we should try to improve our understanding of this 'really existing' universe, and therefore that we should never be content with a record of apparently successful prediction from an instrumentalist theory. We may not, for the moment, be able to do any better; but we should always retain the objective of explaining why the predictions of a particular instrumental theory are successful. Only then do we have any chance of recognising when they may be expected to fail and how the theory might be improved.
Even when it is based on intendedly realist theories, human knowledge is always fallible, as Ziman explains, and knowledge within economic systems is liable to be especially so. Therefore the ability to cope with both incomplete and fallible knowledge, to take appropriate precautions against omissions and error, and to discover errors and generate novel conjectures – most of which will inevitably be false – is a crucial requirement in the organisation of an economy or a polity, as it is in every kind of scientific inquiry. This crucial requirement is inadequately represented in the great bulk of economic theory. In order to minimise confusion, the term 'learning', which is often used, especially by economists, in the sense of the acquisition of information from a prespecified set, or convergence on 'the correct model', will not be prominent in this book. Such a concept of learning may be entirely appropriate within specific limits of time and space, so defined as to allow one to identify a best-corroborated theory, a best available data set, or current best practice; but a good deal more is required for an understanding, let alone an evaluation, of economic systems. It is certainly insufficient for the generation of knowledge.
The inherent fallibility of human knowledge is compounded by the complexity of the universe, which is the second obstacle. The development of our understanding of this universe is hampered by the apparent fact that it 'functions as a single unit with all its imaginable parts having an exact relationship to each other' (Kelly 1963: 6). The failure of novel conjectures about products or processes is often attributable to false implicit assumptions about the structure of the physical, biological, economic and social systems into which they are to be introduced. This 'organised complexity' (Hayek 1978: 26-7) multiplies the difficulties of collecting relevant evidence, and, more fundamentally, of knowing what evidence is relevant. It is also the source of what is commonly called the Duhem-Quine problem: not only can we not prove any general proposition to be true; we cannot even prove it to be false, because we can test only complex conjectures, which embody many supplementary propositions that can never be proved true, and therefore we can never identify for certain precisely what element within that complex is responsible for a falsifying instance. Thus even the negative knowledge that is gained by falsification does not qualify as 'justified knowledge', and positive knowledge must be less secure.
As Hayek warned us, the complexities of human societies threaten the applicability of closed models of social science; but the complexities of the natural world also threaten the reliability of natural science, as Ziman (1978) is well aware. The foundations of knowledge are problematic. However tightly we control our experiments, our controls rest on principles of isolation which are necessarily fallible, and which are sometimes falsified; the well-established inertness of fluorine compounds, which had been put to beneficial uses – for instance as a safe refrigerant and anaesthetic – was eventually found to be inoperative in the ozone layer. As this example illustrates, important relationships may become apparent only after lengthy intervals; in Kelly's (1963: 6) words, 'time provides the ultimate bond in all relationships'.
Alfred Marshall's work as an economist was dominated by this problem of complexity, and by the interaction between complexity and time. He pointed out that 'a theoretically perfect long period…when carried to its logical consequences, will be found to involve the supposition of a stationary state of industry, in which the requirements of a future age can be anticipated an indefinite time beforehand' (Marshall 1920: 379) and warned against misleading closure; it seems most unlikely that he would have regarded intertemporal general equilibrium as an adequate response to this problem. His biographer Peter Groenewegen (1995) has insisted that Marshall intended every theoretical proposition in what was originally published as Volume 1 of his Principles to be treated as provisional; the qualifications and revisions that were necessary for a full understanding of the economy and for the design of sensible policies were to be developed in Volume 2. But Marshall was so obsessed with the interconnectedness of everything, and with his Principle of Continuity which allowed for no clear distinction between the significant and the insignificant, that he could find no acceptable boundaries within which to develop his analysis. We may regret his caution, but after observing the damage that has been caused by later economists who scorned the clear warnings which Marshall had included in Volume 1, we can also respect it.
The third source of our imperfect knowledge, which interacts strongly with the second, is our limited mental capacity. This we can justifiably call Herbert Simon's problem, even though Simon's concept of bounded rationality is itself a boundedly rational definition of our cognitive limitations, implying that we choose by making logical deductions from a truncated information set, and diverting attention from those cognitive powers (for example, our ability to construct, recall, and apply complex patterns, to which Ziman (1978) gives particular attention) that do not seem to rely on what economists, in particular, regard as rational processes. Conlisk's (1996) review of the evidence on human rationality and of economists' responses, though admirable within its declared scope, does not consider the full range of cognitive issues. There is much more to intelligent behaviour than procedures which are even limitedly rational, as we shall see; indeed 'the human mind…may often be better than rational' (Cosmides and Tooby 1994b: 329). But even when striving to be boundedly rational, in Simon's sense, we often find that, as well as being presented with more sensory input than we can handle, we have no means of access to information that would be necessary for an adequate understanding of our situation. Both deficiencies compel us to adopt simplified representations and simplified procedures. These cannot be optimally chosen, and often rely on linkages which cannot be classed as logical; they are conjectures, which go far beyond what we can possibly know. How these simplifying conjectures come to be adopted is a major theme of subsequent chapters.
Perhaps none of this would matter very much if we lived in an unchanging universe; for '[c]hange in some sense is a condition of the existence of any problem whatever in connection with life or conduct' (Knight 1921: 313). In such an unchanging universe, where the weather may be inconstant but the climate does not vary, 'things have time to hammer logic into men' (Schumpeter 1934: 80); routine behaviour is all that is required. In such an environment we, like other species, would not need to know why any routine works, or even to be conscious of the routines that we follow, although as human observers of such behaviour we might be more interested than Schumpeter was in the processes by which routines become established. (It is tempting to speculate whether such an environment would be hospitable to human consciousness, or even to the human species.) However, as Schumpeter also declared, the introduction of change is 'a distinct process which stands in need of special explanation'; it must therefore be included as the fourth source of our imperfect knowledge. The effects of change are not well represented by imposing shocks on a system within which knowledge is supposedly complete (except for the crucial knowledge that unpredictable shocks are to be expected), for, as Schumpeter pointed out, intelligent response to exogenous change requires something more than rational choice as that is currently interpreted. Frank Knight (1921: 313) identified the paradox of rational expectations long before the concept entered economic analysis: our ability to predict the future depends on its similarity to the past, but our need to predict the future results from our belief that it will be different from the past, in ways that are excluded from the definition of rational expectations.
As that experienced manager Chester Barnard (1938: 305) tells us, '[m]uch of the error of historians, economists, and of all of us in daily affairs arises from imputing logical reasoning to men who could not or cannot base their actions on reason'. Non-logical processes are essential to scientific discovery (ibid.: 306), and many decisions relate to unique events, in which causality is difficult to establish (ibid.: 307): in these circumstances the attempt to apply rigorous reasoning
indicates a lack of balance of mental processes; … if there is no basis for calculation, it is more intelligent to guess than to manufacture data for false calculation…. the correctness of such decisions must, therefore, depend upon the effectiveness of the mental processes of the type that can handle contingencies, uncertainties, and unknowables.
(ibid.: 308, 311, 312)
As Barnard emphasises, the mental processes on which we must rely when economic analysis, according to Lucas, is of no value, are far more orderly, indeed intelligent, than 'guesswork' or 'hunches', for what is not rational may also be very far from being irrational; indeed, it may be much less irrational than a pretence of rationality which ignores significant elements of the situation, such as unpredictability. We shall see in Chapter 3 that human cognition seems to be remarkably effective in non-logical processes, though this may seem less remarkable when we reflect that without such abilities the conjectures that are embodied in our species would have been conclusively falsified by the processes of biological evolution.
Non-logical processes are even more important when, as in the telephone business in which Barnard worked, the purpose of many decisions is not to respond to events but to introduce change. It is a characteristic of modern societies that people wish the future to be in some ways different from the past: we therefore require knowledge not only to understand and adapt to what exists, and to the changes in what exists, but in order to create change which will be acceptable to others. This is not a problem in the development of the natural sciences, since it is universally assumed that fundamental particles, molecules, or genes are not purposeful; but it can be a problem for technologists who are trying to adapt scientific knowledge to particular ends. (That is one reason why the relationship between science and technology is not at all straightforward; another reason will receive some attention in Chapter 4.) This desire for purposeful change generates two kinds of systematic interdependencies, with their associated problems (the fifth and sixth) of knowledge.
If there is no disagreement about the kind of change that is desired, then the problem is that of co-ordinating activities: how can we know what other people are intending to do, and how can we bring the relevant pieces of individual knowledge into alignment? In orthodox theory, a fully-aligned system is represented by an equilibrium set of contracts, of which many variants are to be found in the journals; but the process of achieving this alignment appears as the problem of equilibration, which has now effectively been abandoned as insoluble, and is therefore routinely assumed to have been solved. We may give Hayek (1937) the primary credit for posing the issue in a manageable form, and George Richardson (1960, 1990) the primary credit for investigating possible solutions, and for demonstrating in the process that perfect competition is quite inappropriate as a way of even thinking about it. In modern macroeconomics, the effect of assuming that expectations are rational and that all markets clear is that the co-ordination problem cannot even be defined (Leijonhufvud 1998; Rühl and Laidler 1998).
The other kind of difficulty arises when there are conflicting purposes, which may range from straightforward rivalry between similar businesses, through the irruption of Schumpeterian entrepreneurship, to contradictory notions about the appropriate ways of arranging human affairs. Within standard economics, this sixth problem has plagued the analysis of oligopoly, which Simon (1982, vol. 2: 435) once called 'the permanent and ineradicable scandal of economic theory', and is currently manifest in the proliferation of solution concepts in game theory. This proliferation, and the variety of supplementary assumptions that have been introduced to generate solutions, have inspired some of the leading practitioners to widen their definitions of game-theoretic problems beyond the standard framework of highly rational and strictly self-interested agents to include elements such as bounded rationality, conventions, and even trust, in ways which may be consistent with some of the arguments of this book.