
Disaggregation in Econometric Modelling (Routledge Revivals)
About this book
In this book, first published in 1990, leading theorists and applied economists address themselves to the key questions of aggregation. The issues are covered both theoretically and in wide-ranging applications. Of particular interest is the optimal aggregation of trade data, the need for micro-modelling when important non-linearities are present (for example, tax exhaustion in modelling company behaviour) and the use of a micro-model to simulate labour supply behaviour in a macro-model of the Netherlands.
Chapter one
Disaggregation in econometric modelling — an introduction
1.1 The aggregation problem
In much economic theory and most applied work in economic modelling, there is a problem of aggregation. Economic theories generally focus on the behaviour of individuals (consumers or entrepreneurs) or groups such as owners of capital and labour, while empirical economic research is concerned primarily with the relationships between groups. Nearly every study must aggregate over time, over individual people, firms, or agents, over products and techniques, and over space, usually over most of these dimensions. Take the analysis of consumer expenditure as an example of a widely researched area in applied econometrics. Most studies use national expenditure data derived in part from household surveys. The analysis, if time-series, invariably involves temporal aggregation over months, quarters, or years; it will cover large numbers of decision-making units in the form of households and individuals; even if disaggregated, the product groups may cover many thousands of goods (defined as items of value, two equal quantities of which are completely equivalent as regards all characteristics, including location, for each seller and buyer); and the analysis will usually assume that all spending is located at one point in space. Similar considerations also apply to the analysis of firms, and even public institutions.
Despite the pervasiveness of aggregation in economics, and apart from the early contributions of Leontief (1947), Theil (1954), Malinvaud (1956), Gorman (1959), and Grunfeld and Griliches (1960), the aggregation problem is rarely addressed explicitly in mainstream economics.1 Instead it is usually concealed through the use of terms and concepts such as industry, the economy, labour, and capital which, if they are to have any empirical relevance, correspond to broad categories, or are brushed aside by resorting to the Marshallian concept of the ‘representative’ agent.2 Neither response, however, is satisfactory if we are to retain the connection between micro and macro behaviour.3 A proper understanding of the aggregation problem and the particular method chosen for its resolution is of crucial importance for the interpretation and evaluation of applied research in economics.
Unfortunately, a satisfactory resolution of the aggregation problem is not, in general, possible. For example, in the case of aggregation over individual agents, valid aggregation requires a priori knowledge of the distribution of the explanatory or predictor variables across the micro units. Aggregation over commodities also involves special assumptions concerning functional separability that are often highly restrictive, and unlikely to hold in practice. Similarly, restrictive assumptions are also required in the case of aggregation over time and space. The main issue in applied econometrics is not, however, whether consistent aggregation is possible, but rather at what level of aggregation or disaggregation the analysis should be carried out. What is needed is a framework for choosing the appropriate level of disaggregation in particular economic applications.
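To fix ideas, here is a minimal linear sketch of the problem of aggregating over individual agents, in the spirit of Theil (1954); the notation is ours and purely illustrative.

```latex
% Micro relations for units i = 1, ..., N observed over time t:
y_{it} = \alpha_i + \beta_i x_{it} + u_{it}.
% Summing over units gives
Y_t \equiv \sum_{i} y_{it} = \sum_{i} \alpha_i + \sum_{i} \beta_i x_{it} + \sum_{i} u_{it},
% which collapses to a stable macro relation of the form
Y_t = \alpha + \beta X_t + U_t, \qquad X_t \equiv \sum_{i} x_{it},
% only if the \beta_i are identical across units, or if the shares x_{it}/X_t
% (the distribution of the explanatory variable across the micro units) are
% known or constant a priori.
```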
1.2 Which level of aggregation?
The factors influencing the choice of level of aggregation or disaggregation can be summarized under the following broad headings:
• the purpose of the exercise,
• the specification errors involved,
• the data available,
• the attitude of the investigator towards the postulates of simplicity and parsimony.
The purpose of the exercise
The purpose of the exercise is paramount. If the purpose is pedagogical, if the aim is to describe or to understand the underlying mechanisms at work, then the stripping out of inessential detail is important. Aggregation becomes a simplifying technique permitting a clearer understanding of the nature of the forces at work. A good example of this is provided by Leamer in chapter 7 below. If the purpose is policy formulation, for example the appropriate structure of an indirect tax, then disaggregation of policy instruments is essential and disaggregation of the revenue base into its major components is advisable. If it is forecasting, then the decisions which rely on the forecast will determine the minimum level of disaggregation required: aggregate macro variables such as GDP may suffice for the overall management of the economy, whilst the corporate planner may require sectoral forecasts of industrial markets.
The specification errors involved
In addition to the purpose behind the econometric exercise, the choice of level of disaggregation also depends on the relative magnitudes of the error of aggregation and the error of misspecification in the disaggregate model (see Pesaran et al., 1989). For example, as shown by Ilmakunnas in chapter 4, a disaggregated forecasting model distinguishing between housing and non-housing construction may be more appropriate, especially for long forecast horizons, even if the purpose is to forecast the aggregate level of construction activity.
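One schematic way of expressing this trade-off (a simplified restatement in our own notation, not the formal criterion developed in the reference) is the following.

```latex
% Let \hat{Y}^{a}_t be the forecast of the aggregate from the aggregated model, and
% \hat{Y}^{d}_t = \sum_i \hat{y}_{it} the forecast built up from the micro equations.
% For the purpose of predicting the aggregate, disaggregation is worthwhile when
\mathrm{E}\big[(Y_t - \hat{Y}^{d}_t)^2\big] < \mathrm{E}\big[(Y_t - \hat{Y}^{a}_t)^2\big],
% i.e. when the cost of misspecification carried by the micro equations is smaller
% than the cost of the aggregation error incurred by working only at the macro level.
```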
The data available
The limited amount of disaggregated data available and the high costs of collecting and processing new data place further constraints on the level of disaggregation which can be employed in particular applications. Although macroeconomic data are usually derived from disaggregated sources (only a few series such as banks' base rates or some tax rates remain fixed over periods of time and do not involve aggregation over products or space), the costs of maintaining large databanks in a consistent manner at a disaggregated level are high and often prohibitive for individual researchers. Much of the data published by government agencies rely on a few conventional disaggregations by industrial sectors or regions, largely determined by official requirements rather than by considerations of economic research. The limited disaggregated data that are available are also often of doubtful quality, further constraining the extent of disaggregation contemplated by researchers. There is clearly an urgent need for a general improvement in economic measurements, but meanwhile the quantity and the quality of the disaggregated data available will be an important factor in the choice between the aggregated and the disaggregated models.
Simplicity and parsimony
Another important consideration in the choice of level of disaggregation in applied econometrics is the notion of ‘simplicity’ and the overwhelming aversion that seems to exist towards complex hypotheses. Other things being equal, simple hypotheses or models are often regarded as being preferable to more complex ones. The preference for simple hypotheses, referred to by Jeffreys (1937) as the ‘Simplicity Postulate’, is deep-rooted in human psychology and is often justified by appeal to Occam's razor.4 According to Jeffreys, ‘… the simpler hypothesis holds the field; the onus of proof is always on the advocate of the more complicated hypothesis’ (1937, p. 252). A similar point of view is also expressed by Zellner (1971). The application of this postulate to the choice between the aggregated and the disaggregated model is, however, far from straightforward. The difficulty partly lies with the formalization of the concept of ‘simplicity’ or ‘complexity’ and the method used to quantify it. For example, Jeffreys (1948, p. 100) initially adopts a measure of the complexity of a hypothesis in terms of the number of its ‘adjustable parameters’, which was proposed earlier by himself and Dorothy Wrinch (1921). Later, in the third edition of his Theory of Probability, he abandons this definition in favour of a more general one, applicable to any hypothesis expressible by differential equations. He defines the complexity of a differential equation by ‘the sum of the order, the degree and the absolute values of the coefficients’ (1961, p. 47). Popper (1959, chapter 7), taking a different methodological stand, identifies the simplicity of a hypothesis with its falsifiability. He writes: ‘The epistemological questions which arise in connection with the concept of simplicity can all be answered if we equate this concept with degree of falsifiability’ (1959, p. 140). Different conclusions can clearly emerge from the application of these concepts to aggregated and disaggregated models. But even if it is accepted that aggregated models are simpler than their disaggregated counterparts, it does not necessarily follow that they are less probable or less likely to be true. The simplicity postulate on its own is unlikely to provide a justification for the choice of aggregated models over the less aggregated ones. The choice of the appropriate level of disaggregation needs to be made empirically and in the context of particular applications.
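Read mechanically, the later definition can be illustrated as follows; the example is ours, applying the quoted wording literally rather than reproducing one of Jeffreys' own.

```latex
% For the differential equation
y'' + 3y = 0,
% the order is 2, the degree is 1, and the absolute values of the coefficients
% are 1 and 3, so on a literal reading its complexity is
2 + 1 + (|1| + |3|) = 7.
% Under the Simplicity Postulate, hypotheses needing higher-order equations or
% larger coefficients count as more complex and receive lower prior probability.
```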
1.3 The case for aggregation
When the disaggregated model is correctly specified and the available data are free from measurement errors, then the investigator could not do worse (whether for explaining facts or in predicting future behaviour) by adopting a disaggregated approach as compared to an aggregated one; and he or she may do better. The use of aggregated data and models in these circumstances is only justified for educational purposes, to make particular points and as an aid to understanding. The conditions under which aggregation is justified appear to be so restrictive that in nearly every case it would seem that the more disaggregated the analysis the better.5 Yet there are good reasons why the aggregated approach may be justified. The best known, set out by Grunfeld and Griliches (1960), is that the model specification may be less subject to errors at the macro level, rather than at the micro level as assumed by Theil.6 In such a case the macro relationship may provide a better explanation (goodness of fit) and better predictions than the micro relationships. But, if the mis-specification is such that the micro equations omit macro influences, then the remedy may be to respecify the micro equations to include macro variables.
Another reason for analysing at the macro level is that there are errors in variables at the micro level which may roughly cancel out when the micro variables are added together (cf. Aigner and Goldfeld, 1974). Examples of such errors are those caused by misclassification, when an item is classified to one disaggregated group when it should be in another. This is especially damaging in time-series analysis when the misclassification occurs in some time periods and not in others.
A further possible justification is that individual equations have unobserved influences which may cancel out in the aggregate. This will lead to a better fit for the macro equation, and can lead to better predictions if the unobserved influences continue to cancel each other in the prediction period.
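A small simulation can illustrate the cancellation argument; it is our own sketch with invented numbers, not drawn from any of the chapters below. Independent micro-level measurement errors or unobserved influences largely wash out when the micro series are summed, so the aggregate relationship looks 'cleaner' than the typical micro relationship, roughly in proportion to the square root of the number of units.

```python
import numpy as np

# Illustrative simulation (all numbers invented): independent micro-level
# disturbances tend to cancel when micro series are summed, so the aggregate
# relationship has a much lower noise-to-signal ratio than the micro ones.
rng = np.random.default_rng(0)

N, T = 100, 40                                      # micro units, time periods
beta = 0.8                                          # common slope, for simplicity
common = rng.normal(10.0, 2.0, size=T)              # macro driver shared by all units
x = common + rng.normal(0.0, 0.5, size=(N, T))      # micro explanatory variables
e = rng.normal(0.0, 1.0, size=(N, T))               # independent micro disturbances

y = beta * x + e                                    # micro relationships
Y, X = y.sum(axis=0), x.sum(axis=0)                 # simple aggregation by summation

# Noise relative to signal at the micro level and at the aggregate level.
micro_ratio = e.std() / (beta * x).std()
aggregate_ratio = e.sum(axis=0).std() / (beta * X).std()

print(f"micro noise/signal:     {micro_ratio:.2f}")
print(f"aggregate noise/signal: {aggregate_ratio:.2f}  (roughly the micro ratio / sqrt(N))")
```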
The most potent reason for macro analysis, however, is again the availability of data: the investigator often has no choice in the matter, given limited resources for the collection and processing of data.
The usual procedure in applied work, demonstrated by the papers below, is to start at one or other end of the spectrum. If one starts with a macro analysis, the questions arise: what are the benefits of disaggregation and how can they be established at reasonable cost? Starting from a micro model and micro data, the questions are: what are the costs of aggregating in the sense of loss of information, and is aggregation necessarily bad?
1.4 The case for disaggregation
It is worth bringing together the arguments for the disaggregation of macro relationships as they emerge from several chapters in this book.
More information
When reliable disaggregated data are available, then in principle it should be possible to use the extra information in the data to develop and apply more powerful tests to the hypotheses of interest. Difficulties arise in maintaining consistency of definition at the disaggregated level and in the availability of data for the same time period.
Better predictions
With more reliable disaggregated information and a better understanding of micro behaviour, one would expect the disaggregated model to predict better than the aggregated model. The applications of the prediction criterion proposed by Grunfeld and Griliches (1960) so far suggest that there is not much to be gained from disaggregation when the problem is one of predicting macro variables. But the Grunfeld-Griliches criterion has been generally applied to cases where the equations in the disaggregated model all have the same specification. It is important that the possibility of different specifications across micro units be allowed for in the comparison of the predictive performance of the aggregated and the disaggregated model. In fact one of the advantages of disaggregation is that the specification can be varied across micro units to suit the circumstances. For example, in the case of the determination of imports, some commodities such as oil or basic foodstuffs may behave as if the imports were residual supplies whilst others would require a specification more suited to demand for differentiated products. An elementary case of different specification for disaggregated equations occurs when different macro variables are dropped from the micro equations according to economic relevance or statistical significance.
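A second small sketch (again with invented numbers, and not the Grunfeld-Griliches procedure itself) illustrates why heterogeneity across micro units matters for prediction: when the micro slopes differ and the composition of the total shifts between the estimation period and the forecast period, summing forecasts from separate micro equations can beat a single equation fitted to the aggregate.

```python
import numpy as np

# Illustrative comparison (invented data): micro units with different slopes,
# and a composition shift between the estimation and the forecast period.
rng = np.random.default_rng(1)

T, H = 60, 20                                  # estimation periods, forecast horizon
betas = np.array([0.2, 1.5])                   # different micro slopes

# Unit 1 drives the total during estimation, unit 2 during the forecast period.
x1 = np.concatenate([np.linspace(10, 30, T), np.full(H, 30.0)])
x2 = np.concatenate([np.full(T, 20.0), np.linspace(20, 40, H)])
x = np.vstack([x1, x2]) + rng.normal(0, 1, size=(2, T + H))
y = betas[:, None] * x + rng.normal(0, 1, size=(2, T + H))
Y, X = y.sum(axis=0), x.sum(axis=0)            # aggregate series

def fit(xs, ys):
    """Intercept and slope from a simple least-squares fit."""
    slope, intercept = np.polyfit(xs, ys, 1)
    return intercept, slope

# Disaggregated approach: one equation per micro unit, forecasts summed.
micro_fc = np.zeros(H)
for i in range(2):
    a, b = fit(x[i, :T], y[i, :T])
    micro_fc += a + b * x[i, T:]

# Aggregated approach: a single equation fitted to the aggregate series.
a, b = fit(X[:T], Y[:T])
macro_fc = a + b * X[T:]

rmse = lambda fc: np.sqrt(np.mean((Y[T:] - fc) ** 2))
print(f"RMSE, summed micro forecasts: {rmse(micro_fc):.2f}")
print(f"RMSE, single macro equation:  {rmse(macro_fc):.2f}")
```

In this constructed example the macro equation has, in effect, learned the estimation-period composition of the total, so its forecasts deteriorate once the composition changes, while the micro equations carry their unit-specific slopes forward.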
Better parameter estimates
Estimates of parameters from macro equations can be seriously misleading for understanding the mechanisms at work or for formulating policy, because they rely on a particular aggregation structure. For example, the chapter below by Lee, Pesaran, and Pierse (chapter 6) reports a wage elasticity estimated from disaggregated equations for UK industrial employment of –0.54, whilst that estimated from the aggregated counterpart is –0.97. This has the strong policy implication that reductions in real wage rates, by whatever means, may result in rather less impressive effects on employment demand than previous studies, based on aggregated results, have indicated.
The rest of this book is divided into two parts. Part I contains those chapters which are concerned with the question of optimal aggregation and the effects of aggregation on parameter estimates and predictive performance. The chapters which address the practical problems of introducing micro or disaggregated data into macro models or macro equations are included in Part II of the book. This covers the problems of linking macro and micro models and the use of disagg...
Table of contents
- Cover Page
- Half Title page
- Title Page
- Copyright Page
- Original Title Page
- Original Copyright Page
- Contents
- Tables
- Figures
- List of Contributors
- Preface
- 1 Disaggregation in econometric modelling — an introduction
- Part I
- Part II
- Bibliography
- Index