Mathematics

Point Estimation

Point estimation is a statistical method used to estimate an unknown parameter of a population based on sample data. It involves using a single value, such as the sample mean or median, to represent the population parameter. The goal is to find the best estimate of the parameter, taking into account factors like bias and variability.
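The idea above can be sketched in a few lines of Python: draw one sample and use a single number (the sample mean or median) as the point estimate. The population below is simulated with an assumed true mean of 70; the numbers are invented purely for illustration.

```python
import random
import statistics

# Hypothetical population with true mean 70 and sd 10 (assumed values).
random.seed(1)
population = [random.gauss(70, 10) for _ in range(100_000)]

# Draw one random sample and form single-value (point) estimates.
sample = random.sample(population, 50)
mean_estimate = statistics.mean(sample)      # point estimate of the population mean
median_estimate = statistics.median(sample)  # point estimate of the population median

print(round(mean_estimate, 1), round(median_estimate, 1))
```

Each run of the sampling step yields a single number per parameter; how close that number tends to be to the truth is exactly the bias/variability question raised above.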

Written by Perlego with AI-assistance

12 Key excerpts on "Point Estimation"

  • Book cover image for: Theory Sample Surveys and Statistical Decisions
    eBook - PDF

    Theory Sample Surveys and Statistical Decisions

    2nd Fully Revised and Enlarged Edition

He may want to make a quite accurate guess about the feature on the basis of a random sample drawn from the population. This type of problem is known as the “problem of estimation” and such unknown features are usually known as “population parameters”. For example, the population mean, population variance, population correlation coefficient, population regression coefficient, etc. are known as population parameters, or population constants. These parameters are estimated by the corresponding sample statistics (estimators), i.e. by the sample mean, sample variance, sample correlation coefficient and sample regression coefficient respectively. The problem of estimation theory is classified into two parts, known as (i) Point Estimation and (ii) Interval Estimation, which will be discussed one by one. 8.4 Point Estimate An estimate of a population parameter given by a single value computed from the sample observations is known as the point estimate of the corresponding parameter, and such a type of estimation problem is known as the problem of Point Estimation. There are various methods to obtain the point estimates of population parameters. For example, if the population correlation coefficient ρ is equal to 0.75 and the sample correlation coefficient r is computed as 0.69 from a sample drawn from the population, then the value r = 0.69 is a point estimate of the parameter value ρ = 0.75. 8.5 Properties of an Estimator For the same population parameter, there may exist several estimators. The problem then arises of which one must be considered the most appropriate estimator for working out the estimate of the parameter on the basis of the sample observations. Here it is mentioned that there is a difference between the estimator and the estimate of a population parameter, and one should clearly know about this difference.
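The correlation example above can be reproduced numerically: simulate pairs whose population correlation is about 0.75, then compute the sample correlation as a point estimate. The construction Y = ρX + √(1 − ρ²)·noise (for standard normal X and noise) is a standard way to get correlation ρ; seed and sample size are assumptions for illustration.

```python
import random

def pearson_r(xs, ys):
    # Sample (Pearson) correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
rho = 0.75   # assumed population correlation
xs, ys = [], []
for _ in range(200):
    x = random.gauss(0, 1)
    xs.append(x)
    ys.append(rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1))

r_hat = pearson_r(xs, ys)  # a single number: the point estimate of rho
print(round(r_hat, 2))
```

The computed r is one realization of the estimator; a different random sample would give a slightly different point estimate of the same ρ.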
  • Book cover image for: Inferential Statistics
CHAPTER 1 Point Estimation 1.1 Write True/False Q.1 The process of inferring all the relevant information from the known sample to the unknown population is known as “Statistical Inference.” Q.2 An enquirer may be completely ignorant of some feature of the population but may want to make a guess about that feature based on a random sample drawn from the population. This type of problem is known as the “Problem of Estimation.” Q.3 The estimation of a population parameter by a single value (an estimate of the parameter) based on a random sample drawn from the population is known as “Point Estimation.” Q.4 A process of getting an interval within which the parameter is expected to lie is known as “Interval Estimation.” Q.5 An estimator having the properties of consistency, sufficiency and efficiency (by Prof. R.A. Fisher) is known as a “Best Estimator.” Q.6 Any statistic (estimator) whose mathematical expectation is equal to the population parameter, say θ, is known as an unbiased statistic (estimator) of the parameter. Q.7 An estimator whose mathematical expectation is not equal to the population parameter is known as a “Biased Estimator.” Q.8 Let t_n(x_1, x_2, ..., x_n) be an estimator defined on a random sample (x_1, x_2, ..., x_n) of size n drawn from a population having its parametric value, say θ; then t_n is said to be an unbiased estimator of θ if E(t_n) = θ. Q.9 An estimator t_n defined as in Q.8 is said to be a biased estimator of the parameter θ if E(t_n) ≠ θ. Q.10 The amount of bias of an estimator t_n(x) can be worked out using the expression.
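The unbiasedness definition in Q.8, E(t_n) = θ, can be checked by simulation: average the estimator over many repeated samples and compare with θ. This is a minimal sketch; the true mean, sample size, and seed are assumed values, not from the text.

```python
import random
import statistics

random.seed(42)
theta = 5.0   # true population mean (assumed for illustration)
n = 10        # sample size

estimates = []
for _ in range(20_000):
    sample = [random.gauss(theta, 2.0) for _ in range(n)]
    estimates.append(statistics.mean(sample))   # t_n for this sample

mean_of_estimates = statistics.mean(estimates)  # Monte Carlo approximation of E(t_n)
bias = mean_of_estimates - theta                # should be close to 0 for the sample mean

print(round(bias, 3))
```

Because the sample mean is unbiased, the simulated bias hovers near zero; replacing the estimator with, say, `max(sample)` would show a clearly nonzero bias.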
  • Book cover image for: Mann's Introductory Statistics
    • Prem S. Mann(Author)
    • 2017(Publication Date)
    • Wiley
      (Publisher)
8.1 Estimation, Point Estimate, and Interval Estimate In this section, first we discuss the concept of estimation and then the concepts of point and interval estimates. 8.1.1 Estimation: An Introduction Estimation is a procedure by which a numerical value or values are assigned to a population parameter based on the information collected from a sample. In inferential statistics, μ is called the true population mean and p is called the true population proportion. There are many other population parameters, such as the median, mode, variance, and standard deviation. The following are a few examples of estimation: an auto company may want to estimate the mean fuel consumption for a particular model of a car; a manager may want to estimate the average time taken by new employees to learn a job; the U.S. Census Bureau may want to find the mean housing expenditure per month incurred by households; and a polling agency may want to find the proportion or percentage of adults who are in favor of raising taxes on rich people to reduce the budget deficit. The examples about estimating the mean fuel consumption, estimating the average time taken to learn a job by new employees, and estimating the mean housing expenditure per month incurred by households are illustrations of estimating the true population mean, μ. The example about estimating the proportion (or percentage) of all adults who are in favor of raising taxes on rich people is an illustration of estimating the true population proportion, p. If we can conduct a census (a survey that includes the entire population) each time we want to find the value of a population parameter, then the estimation procedures explained in this and subsequent chapters are not needed.
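In the spirit of the polling example above, a point estimate of a population proportion p is simply the sample proportion. The poll counts below are invented for illustration, not taken from any real survey.

```python
# Hypothetical poll (assumed numbers): 312 of 500 respondents in favor.
favor = 312
sample_size = 500

p_hat = favor / sample_size   # sample proportion = point estimate of p
print(p_hat)
```

The single value p̂ = 312/500 = 0.624 stands in for the unknown true proportion p of all adults.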
  • Book cover image for: PROBABILITY AND STATISTICS FOR ENGINEERS AND SCIEN
Notice that a caret or “hat” placed over a parameter signifies a statistic used as an estimate of the parameter. Of course, an experimenter does not in general believe that a point estimate θ̂ is exactly equal to the unknown parameter θ. Nevertheless, good point estimates are chosen to be good indicators of the actual values of the unknown parameter θ. In certain situations, however, there may be two or more good point estimates of a certain parameter which could yield slightly different numerical values. Remember that point estimates can only be as good as the data set from which they are calculated. Again, this is a question of how representative the sample is of the population relating to the parameter that is being estimated. In addition, if a data set has some obvious outliers, then these observations should be removed from the data set before the point estimates are calculated. Point Estimates of Parameters A point estimate of an unknown parameter θ is a statistic θ̂ that represents a “best guess” at the value of θ. There may be more than one sensible point estimate of a parameter. FIGURE 7.1 The relationship between a point estimate θ̂ and an unknown parameter θ: the statistic θ̂ is the “best guess” of the parameter θ. The unknown parameter and its probability distribution f(x, θ) are not known by the experimenter; the data observations x_1, ..., x_n (the sample) are known. Probability theory runs from parameter to data; statistical inference runs from data to parameter. Copyright 2011 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s). Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it.
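A quick sketch of the point made above, that estimates are only as good as the data and an obvious outlier can distort them. The data values are invented for illustration, with the last entry playing the role of an obvious outlier (e.g. a recording error).

```python
import statistics

data = [4.1, 3.9, 4.3, 4.0, 4.2, 39.8]   # last value is an obvious outlier (assumed data)

theta_hat_raw = statistics.mean(data)      # badly distorted by the outlier
cleaned = [x for x in data if x < 10]      # remove the outlier before estimating
theta_hat_clean = statistics.mean(cleaned) # a far better indicator of the parameter

print(round(theta_hat_raw, 2), round(theta_hat_clean, 2))
```

Here the raw mean is 10.05 while the cleaned mean is 4.1, illustrating why outlier screening comes before computing point estimates.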
  • Book cover image for: Probability and Statistics
    eBook - PDF

    Probability and Statistics

    A Didactic Introduction

    • José I. Barragués, Adolfo Morais, Jenaro Guisasola, José I. Barragués, Adolfo Morais, Jenaro Guisasola(Authors)
    • 2016(Publication Date)
    • CRC Press
      (Publisher)
On its own, this might be regarded as a somewhat primitive method of estimating a population parameter, since much of the information in the original data set has been ‘lost’ in summarizing it by just one number. The key concepts at this stage are the notions of population parameter and point estimate, and it is important to gain an intuitive appreciation of the ideas behind Point Estimation by discussing possible solutions to this problem in an informal manner (Lavy and Mashiach 2009, Kaplan et al. 2009). It is best initially to focus very much on the meaning of these concepts, and to contextualize them via our introductory problem; the importance of using real-life examples to teach abstract statistical concepts is discussed in (Mvududu and Kanyongo 2011). This helps circumvent issues arising from misconceptions associated with definitions or meanings. Indeed, statistical ideas are paramount here, rather than the memorization of formulas (O’Brien 2008). The dangers of over-mathematizing course content or over-emphasizing the teaching of formulas without having much concern for the underlying ideas are spelt out in (Batanero 2004, Schau and Mattern 1997). The Relevance of Sampling The notions of estimation and sampling are closely linked. Before considering this in detail, it is worth generating, possibly from an intuitive or naive perspective, some practical ideas for obtaining an estimate of μ, the mean length of stay of an inpatient at this hospital. For example, we could look at the hospital records of all the people discharged on a particular day, find the length of stay for each of them and then work out the sample mean x̄. Here are some questions to ponder over in this regard: i) Would we expect this to give us a good estimate of μ? Is it likely to be representative of the population taken as a whole?
  • Book cover image for: Endocrine Manifestations of Systemic Autoimmune Diseases
10 Point Estimation and Properties of Point Estimators 10.1 Statistics as Point Estimators We noted in Chapter 8 that a statistic is a characteristic of a sample that is used to estimate or determine a parameter θ. In particular, we specified a statistic as some function of the sample random variables X_1, ..., X_n (and also of the sample size n), which itself is a random variable that does not depend upon any unknown parameters. Additionally, we denoted a statistic as T = g(X_1, ..., X_n, n), where g is a real-valued function that does not depend upon θ or on any other unknown parameter. Then the realization of T, t = g(x_1, ..., x_n, n), is determined once the realizations x_i of the sample random variables X_i, i = 1, ..., n, are known. Thus g is a rule (typically expressed as a formula) which tells us how to get t’s from sets of x_i’s. If the statistic T is used to determine some unknown population parameter θ (or some function of θ, τ(θ)), then T is called a point estimator of θ, and its realization t is termed a point estimate of θ. Hence a point estimator renders a single numerical value as the estimate of θ. Specifying a statistic T involves a form of data reduction; that is, we summarize the information about θ contained in a sample by determining a few essential characteristics of the sample values. Hence, for purposes of making inferences about the parameter θ, we employ only the realization of T rather than the entire set of observed data points. Thus the role of T is that it reduces or condenses the n sample random variables X_1, ...
  • Book cover image for: Exercises and Solutions in Biostatistical Theory
    • Lawrence Kupper, Brian Neelon, Sean M. O'Brien(Authors)
    • 2010(Publication Date)
4 Estimation Theory 4.1 Concepts and Notation 4.1.1 Point Estimation of Population Parameters Let the random variables X_1, X_2, ..., X_n constitute a sample of size n from some population with properties depending on a row vector θ = (θ_1, θ_2, ..., θ_p) of p unknown parameters, where the parameter space is the set Ω of all possible values of θ. In the most general situation, the n random variables X_1, X_2, ..., X_n are allowed to be mutually dependent and to have different distributions (e.g., different means and different variances). A point estimator or a statistic is any scalar function U(X_1, X_2, ..., X_n) ≡ U(X) of the random variables X_1, X_2, ..., X_n, but not of θ. A point estimator or statistic is itself a random variable since it is a function of the random vector X = (X_1, X_2, ..., X_n). In contrast, the corresponding point estimate or observed statistic U(x_1, x_2, ..., x_n) ≡ U(x) is the realized (or observed) numerical value of the point estimator or statistic that is computed using the realized (or observed) numerical values x_1, x_2, ..., x_n of X_1, X_2, ..., X_n for the particular sample obtained. Some popular methods for obtaining a row vector θ̂ = (θ̂_1, θ̂_2, ..., θ̂_p) of point estimators of the elements of the row vector θ = (θ_1, θ_2, ..., θ_p), where θ̂_j ≡ θ̂_j(X) for j = 1, 2, ..., p, are the following: 4.1.1.1 Method of Moments (MM) For j = 1, 2, ..., p, let M_j = (1/n) Σ_{i=1}^n X_i^j and E(M_j) = (1/n) Σ_{i=1}^n E(X_i^j), where E(M_j), j = 1, 2, ..., p, is a function of the elements of θ. Then θ̂_mm, the MM estimator of θ, is obtained as the solution of the p equations M_j = E(M_j), j = 1, 2, ..., p. 4.1.1.2 Unweighted Least Squares (ULS) Let Q_u = Σ_{i=1}^n [X_i − E(X_i)]².
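A method-of-moments sketch for a normal population with θ = (μ, σ²), p = 2. Solving M_1 = E(X) = μ and M_2 = E(X²) = μ² + σ² gives the closed form μ̂ = M_1 and σ̂² = M_2 − M_1². The true parameter values and seed below are assumed for illustration.

```python
import random

random.seed(7)
mu_true, sigma_true = 3.0, 2.0   # assumed true parameters (sigma^2 = 4)
xs = [random.gauss(mu_true, sigma_true) for _ in range(5_000)]

n = len(xs)
M1 = sum(xs) / n                 # first sample moment
M2 = sum(x * x for x in xs) / n  # second sample moment

# Solve M1 = mu and M2 = mu^2 + sigma^2 for the MM estimators.
mu_hat = M1
sigma2_hat = M2 - M1 ** 2

print(round(mu_hat, 2), round(sigma2_hat, 2))
```

For distributions without such a closed form, the same p equations M_j = E(M_j) would be solved numerically.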
  • Book cover image for: Statistical Concepts for the Behavioral Sciences
Then we will discuss X̄ and s as estimates of μ and σ, respectively. Finally, we will turn to determining the accuracy of X̄ as an estimate of μ. Statistical inference: Estimating population values from statistics obtained from a sample. Inference: A process of reasoning from something known to something unknown. X̄ as a Point Estimator of μ The sample mean is often used as a point estimate of a population mean (μ). Point Estimation is estimating the value of a parameter as a single point from the value of a statistic. Consider the 100 Implicit View of Intelligence scores of Chapter 3’s Table 3.3. Suppose these scores were obtained from a randomly selected sample of students at a university. The mean for the sample of scores presented in Table 3.3 is 31.8, and the standard deviation is 6.6. This sample mean of 31.8 may be used as an unbiased and consistent estimator of μ, the mean Implicit View of Intelligence score of the population from which the sample was selected. Unbiased Estimator An unbiased estimator is a statistic for which, if an infinite number of random samples of a certain size is obtained, the mean of the values of the statistic equals the parameter being estimated. The mean obtained from a sample selected from a population is an unbiased estimator of that population mean, because, if we take all possible random samples of size N from a population, then the mean of the sample means equals μ. On the other hand, if we calculate a sample variance by dividing the sum of squares by N, we have a biased estimator of σ², for it consistently underestimates σ². But as we indicated in Chapter 5, the estimated population variance, s², calculated by dividing the sum of squares by N − 1, provides an unbiased estimate of σ². Consistent Estimator A consistent estimator is a statistic for which the probability that the statistic has a value closer to the parameter increases as the sample size increases.
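A simulation of the bias discussed above: dividing the sum of squares by N underestimates σ², while dividing by N − 1 is unbiased. The true variance, sample size, and seed are assumed values chosen for illustration.

```python
import random
import statistics

random.seed(3)
sigma2 = 9.0   # assumed true population variance (sd = 3)
N = 5          # small samples make the bias easy to see

biased, unbiased = [], []
for _ in range(30_000):
    sample = [random.gauss(0, 3) for _ in range(N)]
    m = statistics.mean(sample)
    ss = sum((x - m) ** 2 for x in sample)
    biased.append(ss / N)          # divides by N: biased estimator of sigma^2
    unbiased.append(ss / (N - 1))  # divides by N - 1: unbiased estimator

print(round(statistics.mean(biased), 2), round(statistics.mean(unbiased), 2))
```

The average of the N-divisor estimates settles near σ²(N − 1)/N, below the truth, while the N − 1 divisor averages out near σ² itself.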
  • Book cover image for: Statistical Theory
    eBook - PDF

    Statistical Theory

    A Concise Introduction

Chapter 2 Point Estimation 2.1 Introduction As we have discussed in Section 1.1, estimation of the unknown parameters of distributions from the data is one of the key issues in statistics. Suppose that Y_1, ..., Y_n is a random sample from a distribution f_θ(y), where f_θ(y) is a probability or a density function for a discrete or continuous random variable, respectively, assumed to belong to a parametric family of distributions F_θ, θ ∈ Θ. In other words, the data distribution f_θ is supposed to be known up to the unknown parameter(s) θ ∈ Θ. For example, the birth data in Example 1.1 is a random sample from a Bernoulli B(1, p) distribution with the unknown parameter p. The goal is to estimate the unknown θ from the data. We start from a general definition of an estimator: Definition 2.1 A (point) estimator θ̂ = θ̂(Y) of an unknown parameter θ is any statistic used for estimating θ. The value of θ̂(y) evaluated for a given sample is called an estimate. Thus, Ȳ, Y_max = max(Y_1, ..., Y_n), and Y_3 log(|Y_1|) − Y_2^{Y_5} are examples of estimators of θ. This is a general, somewhat trivial definition that still does not say anything about the goodness of estimation. One would evidently be interested in “good” estimators. In this chapter we first present several methods of estimation and then define and discuss their goodness. 2.2 Maximum likelihood estimation This is probably the most used method of estimation. Its underlying idea is simple and intuitively clear. Recall that given a random sample Y_1, ..., Y_n ~ f_θ(y), θ ∈ Θ, the likelihood function L(θ; y) defined in Section 1.2 is the joint probability (for a discrete random variable) or density (for a continuous random variable) of the observed data as a function of the unknown parameter(s) θ: L(θ; y) = ∏_{i=1}^n f_θ(y_i). As we have discussed, L(θ; y) is the measure of likeliness of a parameter’s values θ for the observed data y.
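A maximum-likelihood sketch for the Bernoulli B(1, p) example above: the log-likelihood is Σᵢ [yᵢ log p + (1 − yᵢ) log(1 − p)], maximized at the sample proportion. The true p and seed are assumed for illustration; the coarse grid search is just a numerical check of the closed-form maximizer.

```python
import math
import random

random.seed(11)
p_true = 0.3   # assumed true Bernoulli parameter
ys = [1 if random.random() < p_true else 0 for _ in range(1_000)]

def log_likelihood(p, ys):
    # log L(p; y) = sum_i [ y_i log p + (1 - y_i) log(1 - p) ]
    return sum(y * math.log(p) + (1 - y) * math.log(1 - p) for y in ys)

p_hat = sum(ys) / len(ys)   # closed-form MLE: the sample proportion

# Grid search over (0, 1) confirms the maximum sits at (the grid point nearest) p_hat.
grid = [i / 100 for i in range(1, 100)]
p_grid = max(grid, key=lambda p: log_likelihood(p, ys))

print(round(p_hat, 2), p_grid)
```

For models without a closed-form maximizer, the same log-likelihood would be handed to a numerical optimizer instead of a grid.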
  • Book cover image for: Business Analytics
    • Jeffrey Camm, James Cochran, Michael Fry, Jeffrey Ohlmann(Authors)
    • 2020(Publication Date)
As is evident from Table 6.2, the point estimates differ somewhat from the values of corresponding population parameters. This difference is to be expected because a sample, and not a census of the entire population, is being used to develop the point estimates. Practical Advice The subject matter of most of the rest of the book is concerned with statistical inference, of which Point Estimation is a form. We use a sample statistic to make an inference about a population parameter. When making inferences about a population based on a sample, it is important to have a close correspondence between the sampled population and the target population. The target population is the population about which we want to make inferences, while the sampled population is the population from which the sample is actually taken. In this section, we have described the process of drawing a simple random sample from the population of EAI employees and making point estimates of characteristics of that same population. So the sampled population and the target population are identical, which is the desired situation. But in other cases, it is not as easy to obtain a close correspondence between the sampled and target populations. Consider the case of an amusement park selecting a sample of its customers to learn about characteristics such as age and time spent at the park. Suppose all the sample elements were selected on a day when park attendance was restricted to employees of a large company.
  • Book cover image for: Modern Engineering Statistics
Is this really a large sample approximation, since we are acting as if the sample interquartile range is equal to the population interquartile range? That depends on how we define the population. If our interest centers only on a given year, then we have the population, assuming that the numbers were reported accurately. Similarly, σ could be estimated using other percentiles, if known, of a normal distribution, and in a similar way σ_x could be estimated in general when X does not have a normal distribution if similar percentile information were known. 4.7 SUMMARY Parameter estimation is an important part of statistics. Various methods are used to estimate parameters, with least squares and maximum likelihood used extensively. The appeal of the former is best seen in the context of regression analysis (as will be observed in Chapter 8), and maximum likelihood is appealing because maximum likelihood estimators maximize the probability of observing the set of data that, in fact, was obtained in the sample. For whatever estimation method is used, it is desirable to have estimators with small variances. The best way to make the variance small is to use a large sample size, but cost considerations will generally impose restrictions on the sample size. Even when data are very inexpensive, a large sample size could actually be harmful, as it could result in a “significant” result in a hypothesis test that is of no practical significance. This is discussed further in Section 5.9. In general, however, large sample sizes are preferable.
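A percentile-based sketch of the idea above: for a normal distribution the interquartile range is about 1.349σ, so σ can be point-estimated as IQR/1.349. The true σ, sample size, and seed below are assumed values for illustration.

```python
import random
import statistics

random.seed(5)
sigma_true = 4.0   # assumed true standard deviation
xs = [random.gauss(10, sigma_true) for _ in range(10_000)]

q1, q2, q3 = statistics.quantiles(xs, n=4)  # sample quartiles
iqr = q3 - q1
sigma_hat = iqr / 1.349                     # normal-theory point estimate of sigma

print(round(sigma_hat, 2))
```

The constant 1.349 is specific to the normal distribution; with other percentile information and other distributional assumptions, the divisor would change accordingly.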
  • Book cover image for: Systematic Glossary of the Terminology of Statistical Methods
    eBook - PDF
    • I. Paenson(Author)
    • 2014(Publication Date)
    • Pergamon
      (Publisher)
pronounced. The fact that these sampling distributions are more or less normal is of very great importance in statistical theory. B. Point Estimation / B. L’ESTIMATION PONCTUELLE (1) Point estimation assigns a single true value to the parameter being estimated. The four major criteria applied in the evaluation of estimating methods are the following: (a) the estimates must be unbiased: a given statistic t_e is an unbiased estimate of the corresponding population parameter Θ if the latter is the mathematical expectation (expected value) of the former, i.e. if the arithmetic mean of the t_e values converges in probability, as the number of samples of fixed size N increases, to Θ; (b) the estimates must be consistent: a given statistic t_e is a consistent estimate of the corresponding population parameter Θ if, as the sample size N tends to infinity, the values of t_e converge in probability to Θ; (c) the efficiency of the estimates must be as great as possible, i.e. the variance of their sampling distribution (their sampling variance) must be as small as possible: obviously, statistics whose distribution is concentrated about the parameter to be estimated will give us better estimates than statistics whose distribution is marked by considerable dispersion; (d) the estimates must be sufficient, i.e. contain all the information, about the parameter to be estimated, inherent in the sample.
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.