Statistical Error
Statistical error in economics refers to the discrepancy between a true value and the value estimated from a sample due to random variation. It encompasses both sampling error, which arises from using a sample to estimate a population parameter, and non-sampling error, which includes measurement and processing errors. Understanding and accounting for statistical error is crucial for accurate economic analysis and decision-making.
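The gap between a true population value and a sample estimate can be made concrete with a small simulation. The sketch below is purely illustrative: the simulated household incomes, the sample size of 500, and the random seed are invented for the example, not drawn from any source.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: 100,000 simulated household incomes.
# (All numbers here are invented for illustration.)
population = [random.gauss(50_000, 12_000) for _ in range(100_000)]
true_mean = statistics.mean(population)

# A single survey sample yields a single estimate of the population mean.
sample = random.sample(population, 500)
estimate = statistics.mean(sample)

# Sampling error: the chance discrepancy between the estimate and the true value.
sampling_error = estimate - true_mean

print(f"true mean:       {true_mean:,.0f}")
print(f"sample estimate: {estimate:,.0f}")
print(f"sampling error:  {sampling_error:+,.0f}")
```

Because only random variation separates the estimate from the true mean here, rerunning with a different seed gives a different sampling error; non-sampling error (mismeasurement, processing mistakes) would be an additional, systematic discrepancy that no amount of resampling removes.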
Written by Perlego with AI-assistance
4 Key excerpts on "Statistical Error"
- Robert M. Groves (Author)
- 2005 (Publication Date)
- Wiley-Interscience (Publisher)
1.7 DEBATES ABOUT INFERENTIAL ERRORS Although the review of error terminology above cites most of the key differences, it underemphasizes one striking difference between the approaches of survey statistics, on one hand, and those of psychometrics and econometrics, on the other. Survey statistics concerns itself with the estimation of characteristics of a finite population fixed at a moment in time. Psychometrics and econometrics concern themselves with the articulation of causal relationships among variables. Sometimes these two viewpoints are labeled descriptive versus analytic uses of data (Deming, 1953; Anderson and Mantel, 1983) within the survey statistics field, and applied versus theoretical uses of data by those in the quasi-experimental design group (Calder et al., 1982, 1983; Lynch, 1982). [Figure 1.7, "The structure and language of errors used in econometrics for estimates of regression model parameters," diagrams variance, bias, sampling variance, errors in variables, and selection bias; shaded concepts are not central to the viewpoint of econometrics.] Those who build causal models using data most often concentrate on the correct specification of the form of the models and less so on various errors of nonobservation. Those who use the data for descriptive purposes are more concerned with nonresponse and noncoverage issues. The debates in psychology about the relative importance of external validity (e.g., Mook, 1983) and in statistics about design-based versus model-based inference essentially revolve around how the researcher conceptualizes the impact of errors of nonobservation. These debates are discussed in more detail in Chapters 3 and 6.

1.8 IMPORTANT FEATURES OF LANGUAGE DIFFERENCES The sections above provide the kind of information common to a glossary, but they do not highlight the several reasons for misunderstanding among the various groups mentioned in Section 1.1.
Model Building in Economics
Its Purposes and Limitations
- Lawrence A. Boland (Author)
- 2014 (Publication Date)
- Cambridge University Press (Publisher)
As noted in Chapter 9, to conduct an empirical test one must know something about the available data. Unlike my illustrative examples using singular observations, usually in empirical economics the available data are in the form of non-experimental statistical data. The main obstacle for using statistical data in economics (either for forecasting or for testing) is that one must in effect specify (i.e., create) a statistical model by making assumptions about the nature of that data in order to determine the best estimator – that is, the statistical method needed (one that captures the statistical information in the data) to use for the forecast or test. Specifically, what do we know about the nature of the errors in that data? Unfortunately, for non-experimental data, we know very little – we do not know the probabilistic structure of the possibly different types of error. Errors could be simply measurement errors, but they could be sampling errors. However, for experimental data, we can by design neutralize other effects in order to render the error white noise. So, we might ask, are data normally distributed such that samples of the data have errors with constant means and constant variation and are independently and identically generated? If the method to be used, for example, requires normally distributed errors but the data are not normally distributed – or, more importantly, the data are not independent and identically distributed – we would have to say that any statistical model or estimation method, such as classical linear regression, that requires normally, identically and independently distributed errors would be statistically inadequate for the intended forecast or test, as the data are misspecified for such a regression.
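Boland's point about independence can be illustrated with a simple diagnostic. The sketch below compares white-noise errors with errors from an AR(1) process (each error carrying over part of the previous one) using lag-1 autocorrelation, one quick and far-from-exhaustive check on the independence assumption. The AR(1) process and its 0.8 coefficient are invented for illustration, not taken from the text.

```python
import random
import statistics

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation: a quick, partial check on independence."""
    m = statistics.mean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

random.seed(2)

# White-noise errors, as classical linear regression assumes: independent draws.
white = [random.gauss(0, 1) for _ in range(5_000)]

# AR(1) errors: each error carries over 80% of the previous one (not independent).
ar1, prev = [], 0.0
for _ in range(5_000):
    prev = 0.8 * prev + random.gauss(0, 1)
    ar1.append(prev)

print(f"lag-1 autocorrelation, white noise: {lag1_autocorr(white):+.3f}")  # near 0
print(f"lag-1 autocorrelation, AR(1):       {lag1_autocorr(ar1):+.3f}")   # near 0.8
```

With experimental data one can design away such dependence; with non-experimental economic data, as the excerpt stresses, one can only assume it away and then check whether the assumption survives diagnostics like this one.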
Data Analysis for Social Science
A Friendly and Practical Introduction
- Elena Llaudet, Kosuke Imai (Authors)
- 2022 (Publication Date)
- Princeton University Press (Publisher)
It measures the amount of variation of the estimator around the true value of the population-level parameter.

[Figure 7.1. Sampling distribution of an estimator. All the estimators covered in this book have a sampling distribution that is approximately normal and centered at the true value of the population-level parameter. The standard error of an estimator quantifies the spread of its sampling distribution, which is a measure of the degree of uncertainty of the estimator.]

Note that since we usually draw only one sample from the population, we can compute only one value of the estimator. This one estimate might be close to the true value of the parameter (as is, for example, the value of estimate 1 in the figure in the margin), or it might be quite far away (as is the value of estimate 2). When working with only one sample of data, we never know how far our estimate is from the true value since the true value is unknown. The difference between the estimate and the true value is called the estimation error:

estimation error_i = estimate_i − true value

where:
- estimation error_i is the estimation error for sample i
- estimate_i is the estimate for sample i
- true value is the true value of the population-level parameter.

[Margin definitions: The estimation error is the difference between the estimate and the true value of the parameter. The average estimation error, also known as bias, is the average difference between the estimate and the true value of the parameter over multiple hypothetical samples. An estimator is said to be unbiased if the average estimation error over multiple hypothetical samples is zero. The standard error is an estimate of the average size of the estimation error over multiple hypothetical samples.]

In the hypothetical cases above, the estimation errors would be (estimate 1 − true value) and (estimate 2 − true value).
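The definitions above (sampling distribution, bias, standard error) can be checked by simulation: draw many hypothetical samples, compute the estimator each time, and summarize the resulting estimates. The population, sample size, and number of replications below are invented for illustration.

```python
import random
import statistics

random.seed(1)

# Hypothetical population; the parameter of interest is its mean.
population = [random.gauss(10.0, 2.0) for _ in range(50_000)]
true_value = statistics.mean(population)

# Draw many hypothetical samples; the estimator is the sample mean.
estimates = [statistics.mean(random.sample(population, 100))
             for _ in range(2_000)]
errors = [est - true_value for est in estimates]

# Bias: the average estimation error over the hypothetical samples.
bias = statistics.mean(errors)

# Standard error: the spread of the sampling distribution of the estimator.
standard_error = statistics.stdev(estimates)

print(f"bias:           {bias:+.4f}")          # close to 0: sample mean is unbiased
print(f"standard error: {standard_error:.4f}") # close to 2 / sqrt(100) = 0.2
```

In practice only one of these 2,000 samples would be available, which is exactly the excerpt's point: the single realized estimation error is unknowable, so the standard error stands in as a measure of its typical size.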
- Marcel Boumans, Giora Hon, Arthur C. Petersen (Authors)
- 2015 (Publication Date)
- Routledge (Publisher)
- Incongruous measurement errors: data Z0 do not adequately quantify the concepts envisioned by the theory. This, more than the other substantive sources of error, is likely to be the most serious factor in ruining the trustworthiness of empirical evidence.
- Statistical misspecification errors: one or more of the probabilistic assumptions of the statistical model Mθ(z) is invalid for data Z0.
- Substantive inadequacy errors: the circumstances envisaged by the theory in question differ 'systematically' from the actual data-generating mechanism underlying the phenomenon of interest. This inadequacy can easily arise from impractical ceteris paribus clauses, missing confounding factors, false causal claims, etc.
Statistical versus Substantive Premises of Inference
The confusion between statistical and substantive assumptions permeates the whole econometric literature. A glance at the complete specification of the linear regression model in econometric textbooks reveals that alongside the probabilistic assumptions pertaining to the observable processes (made via the error term) underlying the data in question, there are substantive assumptions concerning omitted variables, measurement errors and non-simultaneity. The key question is: how can one disentangle the two types of assumptions? Error statistics views empirical modelling as a piecemeal process that relies on distinguishing between the statistical model Mθ(z) and the substantive model Mφ(z), clearly delineating the following two questions:

- statistical adequacy: does Mθ(z) account for the chance regularities in Z0?
- substantive adequacy: is model Mφ(z) adequate as an explanation (causal or otherwise) of the phenomenon of interest?
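A minimal sketch of a statistical-adequacy check in the spirit of this excerpt: fit a linear model to data generated by a quadratic mechanism, then ask whether the residuals look like chance regularities. The data-generating process, noise level, and diagnostic below are invented for illustration, not taken from the text.

```python
import random

def corr(u, v):
    """Pearson correlation between two equal-length sequences."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

random.seed(3)

# Hypothetical data-generating mechanism: quadratic with small white-noise errors.
xs = [i / 100 for i in range(-200, 201)]
ys = [x * x + random.gauss(0, 0.1) for x in xs]

# Fit the (misspecified) linear model y = a + b*x by ordinary least squares.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Statistical-adequacy check: residuals from an adequate model should be
# patternless; here they track x^2, exposing the misspecification.
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
x_squared = [x * x for x in xs]
print(f"corr(residuals, x^2) = {corr(residuals, x_squared):+.3f}")  # near +1
```

Passing such a check would speak only to statistical adequacy; whether the fitted model explains the phenomenon of interest (substantive adequacy) is, as the excerpt stresses, a separate question.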



