INTRODUCTION
Large-scale models are a product of our times, and their importance in government policy work continues to grow. Policymakers use large-scale models because the complexity of potential impacts from policy decisions leaves them no alternative, yet at the same time they are skeptical of the reliability of the forecasts and calculations these models produce. There is also great acceptance by the populace at large of results coming from computer analyses, which in their minds represent the epitome of technological solutions to problems. Analyses are somehow seen as more believable if they are based on computer technology.
A recent article in Fortune characterizes the importance of this relatively new modeling phenomenon in policy work.1
"When the history of economic policymaking in the turbulent late 1970s is written, an important part of this story will be about the widening impact of econometric models on federal policy decisions. The wondrous computerized models--and the economists who run them--have become new rages on the Washington scene. These days it seems every spending and tax bill is played into mathematical simulations inside the computer. The model managers themselves are star witnesses before congressional committees whose members seek to define the future. And what these machines and their operators have to say has come to have a significant bearing on what Washington decides to do....
On more than one occasion the models have contradicted pronouncements by senior government figures. A small model at the Council of Economic Advisors, for instance, indicated that James Schlesinger, the Secretary of Energy, was talking through his hat last March when he predicted that disastrous consequences would flow from last winter's coal strike."
Examples of the recent use of large-scale econometric models in the debate over national economic policy go on and on. The most prominent among the large-scale models used in economic policy planning by government are the DRI, Chase, and Wharton modeling efforts.
Outside of the use of econometric models in general economic planning by government there has been a major surge of new computer modeling for purposes of energy policy planning and analysis. Some of the better known of these modeling efforts include the Department of Energy's (DOE) Project Independence Evaluation System (PIES), now known as the Midterm Energy Forecasting System (MEFS), the Brookhaven Energy Reference System, the Stanford University PILOT modeling effort, the Baughman-Joskow Regional Electricity Model, and many others.
Greenberger et al., in a recent book,2 have classified the types of models into five categories related to disciplines: (1) linear economics, (2) operations research, (3) statistical economics, (4) urban and regional development, and (5) engineering. The nine specific methodologies listed within these disciplines include (1) input/output analysis, (2) linear programming, (3) two-person zero-sum games, (4) probabilistic methods, (5) algebraic methods, (6) econometric modeling, (7) microanalysis, (8) land use analysis, and (9) systems dynamics. The disciplines of economics, operations research, statistics, urban and regional planning, and engineering are heavily involved in these quantitative approaches to modeling systems of the real world.
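To give a concrete flavor of the first methodology in this list, the following is a minimal sketch of an input/output (Leontief) calculation. All coefficients and demand figures are hypothetical illustrations, not drawn from any model discussed in this report; the calculation solves the standard balance condition x = Ax + d for total output x.

```python
# Toy two-sector Leontief input/output model.  Total output x must
# satisfy x = A x + d, i.e. x = (I - A)^-1 d, where A holds the
# interindustry coefficients and d the final demand.

def leontief_output(A, d):
    """Solve the 2x2 system (I - A) x = d by Cramer's rule."""
    a, b = 1 - A[0][0], -A[0][1]
    c, e = -A[1][0], 1 - A[1][1]
    det = a * e - b * c
    x1 = (e * d[0] - b * d[1]) / det
    x2 = (a * d[1] - c * d[0]) / det
    return x1, x2

# A[i][j]: dollars of sector i's output needed per dollar of sector j's
# output (hypothetical coefficients)
A = [[0.2, 0.3],
     [0.1, 0.4]]
d = [100.0, 50.0]   # final demand by sector (hypothetical)

x1, x2 = leontief_output(A, d)
```

The point of the technique is visible even at this scale: total output in each sector exceeds final demand because each sector must also supply the other sectors' intermediate needs.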
In energy policy analysis these models have been used most recently for projecting the world crude oil situation, the impact of U.S. Government conservation and production incentives on the U.S. economy, in producing annual outlook reports for the U.S. energy situation, and most heatedly in the national debate over natural gas pricing during the discussion of President Carter's National Energy Plan (NEP).
The above quotations serve to point out that large-scale models are very prominent in government policy planning work and are being used extensively; large-scale computer models are part of the professional and political life of today.
The current dilemma of policymakers in the use of large-scale models is further exemplified by a recent interview with Secretary Schlesinger:3
Q: The original NEP would have saved how many barrels of imports?
A: We estimated 4.5 million b/d.
Q: So the level would be about 7 million b/d in 1985?
A: Right.
Q: And this yearâs legislation would save how much?
A: About 2.5 to 3 million b/d.
Q: So the level of imports in 1985 would be 9 or 10 million b/d?
A: Yes. Our estimate is in that range. Those models (which are used in forecasting) aren't worth very much.
Q: I'm glad to hear you say that--after the bill is passed you admit it, is that it?
A: I stated it rather forcefully from the first. Matter of fact, my troops had to force me to include these estimates from the PIES model. What we are trying to do is to change behavior, change behavioral reactions. Yet we get our estimates of the future from a model that draws on parameters of past behavior. It's just logically inconsistent.
Q: A model cannot predict the future--it can only estimate based on history?
A: Another thing wrong is that we should never have had only point estimates--we should have had a range. The volume of oil imports is contingent on such factors as nuclear power plant start-ups, the amount of natural gas produced or coal used, etc. I find it exceedingly difficult to predict how much natural gas we will produce in 1985. The resource base there is much more substantial than the oil base. How much really depends partly on whether intrastate producers sulk, believe in the "regulatory nightmare" myth, or whether they will just get cracking.
Q: Do you think the resource base on gas is still pretty large? Potentially undiscovered? You feel better now than a year ago?
A: Just look at the results. I'm not sure the numbers are different. I feel a little better about gas.
Q: You think the higher price of gas will encourage more exploration? More development?
A: We're estimating production of 2 trillion cubic feet more gas by 1985 in the lower 48 states under this legislation, compared with the status quo.
The Senate Energy Committee recently asked Dr. Herman T. Franssen, formerly with the Congressional Research Service, to look into the subject of forecasts of energy supply and demand, both national and international. Dr. Franssen's main conclusion is contained in the summary of his report.4
The main reason for inaccuracies is that forecasts are influenced by the Zeitgeist--the spirit of the times. When the spirit is optimistic, the energy forecasts are optimistic. When all is gloom and doom, then everyone says we are running out of energy. Mathematical models, which are supposed to screen out such subjective influences, have been even less accurate than the subjective forecasts. As for the conflicting forecasts being made now with an eye towards 1990, one is no more likely to be accurate than another. Due to physical and human phenomena subject to surprise and frequent change, it is understandable that different projections of energy demand, supply, and prices can be believable at the same time. Finally, a lot depends on who hires the forecaster. The policymaker is no doubt aware that energy forecasters work for different clients, and that, for example, oil companies plan for future opportunities while governments have to plan for national supply security.
Seen from a slightly different perspective, Greenberger et al. speak of the difficulty of modeling systems that include the sociological behavior of people.5
Theory as a basis for a formal model is of greatest value when it refers to a reference system that changes only slowly. Unfortunately, policy areas are among the most volatile fields of application for models. Economist Robert M. Solow of MIT puts it this way: "One advantage the physicist has over the economist is that the velocity of light has not changed over the past thousand years, while what was in the 1950s and 1960s a good wage and price equation is no longer so."
Theories that change only slowly over time may not keep up with policy areas that change rapidly and constantly. Recognizing this shortcoming in theory, a modeler may personally adjust the projections of his model to reflect an impending strike, war, bankruptcy, or other events and trends not covered by the theory. In a model of the national economy, "add factors" may be applied to the modeler's projections to incorporate the modeler's judgment and intuition about economic trends. The practice is widespread, and many well-known econometric models use some form of judgmental adjustment. Louis Mumford has observed recently that computer models can acquire a God-like authority and their results can be taken as gospel. A model masquerading as an oracle may be nothing more than an advocate in technological guise.
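The "add factor" practice described above can be sketched in a few lines: the model produces a mechanical projection from estimated parameters, and the modeler overlays a judgmental constant on top of it. The elasticity, base level, and adjustment below are hypothetical values invented for illustration, not figures from any econometric model named in this report.

```python
# Sketch of judgmental adjustment via an "add factor".  The mechanical
# projection comes from estimated parameters of past behavior; the add
# factor injects the modeler's judgment about events the theory omits.
# All numbers are illustrative.

def model_projection(gdp_growth, elasticity=1.1, base_demand=70.0):
    """Mechanical projection: demand scales with GDP growth via an elasticity."""
    return base_demand * (1 + elasticity * gdp_growth)

def adjusted_projection(gdp_growth, add_factor=0.0):
    """Overlay the modeler's judgmental add factor on the raw projection."""
    return model_projection(gdp_growth) + add_factor

raw = model_projection(0.03)              # pure-model estimate
judged = adjusted_projection(0.03, -2.5)  # e.g. anticipating a strike
```

The sketch makes the transparency problem concrete: the published number (`judged`) blends a reproducible calculation with a subjective constant, and unless the add factor is reported separately, users of the forecast cannot tell the two apart.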
However, policymakers must use large-scale models despite their skepticism of the models' predictive reliability, because they have no alternative way to handle these complexities. Secretary Schlesinger's interview exhibits this dilemma on the part of policymakers.
The major task that lies before us is to improve the usefulness of models and the judgments of professionals involved in public policy analysis. The central issue is the procedures by which the reliability of large-scale models, especially those used in public policy work, can be established and made transparent--that is, how to distinguish between the influence of professional, subjective judgments and the influence of information that is reproducible by others. The Texas National Energy Modeling Project (TNEMP) has made some contribution to the goal of increased model credibility by transferring and operating MEFS.