Physics

Uncertainties and Evaluations

In physics, uncertainties refer to the potential errors or variations in measurements or calculations. Evaluations involve assessing and quantifying these uncertainties to determine the reliability and accuracy of the results. Understanding and accounting for uncertainties is crucial in physics to ensure the validity of experimental data and the precision of theoretical predictions.

Written by Perlego with AI-assistance

9 Key excerpts on "Uncertainties and Evaluations"

  • Theory and Design for Mechanical Measurements, International Adaptation
    • Richard S. Figliola, Donald E. Beasley (Authors)
    • 2023 (Publication Date)
    • Wiley (Publisher)
    But most often we do not know the true value of the measured variable but rather only know its measured value instead. So we will not know the exact values of the errors affecting the measurement. Instead, we draw from what we do know to estimate a probable range to define the limits of error in the measurement. This estimate of the limits of the error is an assigned value called the uncertainty. The uncertainty is associated with an interval about the measured value within which we suspect that the true value must fall at a stated level of probability. Uncertainty analysis is the process of identifying, quantifying, and combining the estimated values of the errors in reporting test results. Uncertainty is a property of the result. The outcome of a measurement is a result, and the uncertainty quantifies the quality of that result. Uncertainty analysis provides a powerful design tool useful for evaluating different measurement systems and methods, designing a test plan, and/or reporting uncertainty. This chapter presents a systematic approach for identifying, quantifying, and combining the estimates of the errors in a measurement. While the chapter stresses the methodology of analyses, we emphasize the concomitant need for an equal application of critical thinking and professional judgment in applying the analyses. The quality of an uncertainty analysis depends on the engineer’s knowledge of the test, the measured variables, the equipment, and the measurement procedures [1]. Errors are effects, and uncertainties are numbers. In the context of an uncertainty analysis, an error is an effect that causes a measured value to differ from the true value. The uncertainty is an assigned numerical value that quantifies the probable limits of the error. This chapter approaches uncertainty analysis as an evolution of information from test design through final data analysis.
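The excerpt above describes combining quantified error estimates into a single uncertainty for a reported result. A common way to combine independent elemental uncertainty estimates is the root-sum-square (RSS) method; the short Python sketch below illustrates the idea with made-up values (the measured quantity, the elemental contributions and their magnitudes are assumptions, not taken from the excerpt).

```python
import math

def combine_rss(uncertainties):
    """Combine independent elemental uncertainty estimates by root-sum-square."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# Hypothetical elemental uncertainty estimates for a pressure measurement (kPa):
# instrument resolution, calibration, and data-scatter contributions.
elemental = [0.05, 0.12, 0.08]

u_combined = combine_rss(elemental)
print(f"Combined standard uncertainty: {u_combined:.3f} kPa")   # 0.153 kPa
```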
  • Experimental Methods for Science and Engineering Students
    An Introduction to the Analysis and Presentation of Data
    4 Dealing with Uncertainties 4.1 Overview: What Are Uncertainties? An experiment may require that measurements be made with simple equipment such as a stopwatch or a metre ruler, or it may involve sophisticated instruments such as those carried by NASA’s Mars Curiosity Rover, designed to analyse rock, soil and air samples. Through measurement we are trying to determine the value of a quantity such as the time for an object to fall a fixed distance to the ground, the distance between a lens and an image formed by the lens or the chemical composition of minerals found on Mars. If we were to make repeat measurements of a quantity, we would likely find that the values obtained would vary one to the next. This leads us to the idea that there is an amount of uncertainty in values obtained through measurement. In this chapter we will look at ways of recognising and dealing with uncertainties arising from measurement. 4.1.1 Example of Variability in Values Obtained through Measurement Consider an experiment in which a small object falls through a fixed distance and the time for it to fall is measured using a stopwatch. Table 4.1 contains 10 values recorded for the time of fall. We might have hoped that, on each occasion when we measured the time of fall of the object, we would obtain the same value. This is not true in this experiment, and in general it is untrue of any experiment. We must acknowledge that variability in values obtained through measurement is an inherent feature of all experimental work. What we need to be able to do is recognise, examine and quantify the variation; otherwise the reliability of our experiment may be questioned, and any conclusions drawn from the experiment may be of limited value. If it is possible to identify the main cause(s) of the variation in the experimental data, then we may be able to redesign the experiment to reduce the variability.
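Table 4.1 itself is not reproduced in the excerpt, so the sketch below uses ten hypothetical fall-time readings to show how the variability described here can be quantified with the mean, the sample standard deviation and the standard error of the mean.

```python
import statistics

# Hypothetical repeat readings of the fall time (seconds); the actual values
# from Table 4.1 are not reproduced in the excerpt.
times = [0.64, 0.61, 0.63, 0.66, 0.62, 0.65, 0.63, 0.60, 0.64, 0.62]

mean_t = statistics.mean(times)
s_t = statistics.stdev(times)        # sample standard deviation (n - 1 in the denominator)
sem_t = s_t / len(times) ** 0.5      # standard error of the mean

print(f"mean = {mean_t:.3f} s, s = {s_t:.3f} s, standard error = {sem_t:.3f} s")
```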
  • Theory and Design for Mechanical Measurements
    • Richard S. Figliola, Donald E. Beasley (Authors)
    • 2015 (Publication Date)
    • Wiley (Publisher)
    Instead, we draw from what we do know about the measurement to estimate a range of probable error. This estimate is an assigned value called the uncertainty. The uncertainty describes an interval about the measured value within which we suspect that the true value must fall with a stated probability. Uncertainty analysis is the process of identifying, quantifying, and combining the errors. Uncertainty is a property of the result. The outcome of a measurement is a result, and the uncertainty quantifies the quality of that result. Uncertainty analysis provides a powerful design tool for evaluating different measurement systems and methods, designing a test plan, and reporting uncertainty. This chapter presents a systematic approach for identifying, quantifying, and combining the estimates of the errors in a measurement. While the chapter stresses the methodology of analyses, we emphasize the concomitant need for an equal application of critical thinking and professional judgment in applying the analyses. The quality of an uncertainty analysis depends on the engineer’s knowledge of the test, the measured variables, the equipment, and the measurement procedures (1). Errors are effects, and uncertainties are numbers. While errors are the effects that cause a measured value to differ from the true value, the uncertainty is an assigned numerical value that quantifies the probable range of these errors. This chapter approaches uncertainty analysis as an evolution of information from test design through final data analysis. While the structure of the analysis remains the same at each step, the number of errors identified and their uncertainty values may change as more information becomes available. In fact, the uncertainty in the result may increase. There is no exact answer to an analysis, just the result from a reasonable approach using honest numbers.
  • Dealing with Data
    The Commonwealth and International Library: Physics Division
    • Arthur J. Lyon, W. Ashhurst (Authors)
    • 2013 (Publication Date)
    • Pergamon (Publisher)
    One can be exact and completely error-free only in pure mathematics or formal logic; and one achieves it there at the expense of no longer saying anything factual about the real world. Knowledge of the world, of nature, or of human society is always incomplete, approximate, and subject to error; and the results of physical measurements are no exception to this rule. We might say that liability to error and uncertainty is the price we must pay if we want to be able to make statements having real factual content. In mathematical physics, for example, it is possible to make statements which are exact, but they are essentially hypothetical in character, stating that if certain assumptions hold, such and such conclusions must follow. In the experimental sciences we wish to discover what assumptions and laws are actually valid, and within what conditions and limits they remain valid; but even our most confident judgements will be subject to elements of qualitative uncertainty and our best numerical values to some degree of inexactness. The levels of precision actually achieved in modern scientific and technological work, and the enormous successes achieved in the practical application of scientific results, show that such limitations do not prevent continual progress and improvement. The failures which also occur, and the revisions which are constantly made as science progresses, show, on the other hand, that the limitations are real. 2. READING AND SETTING ERRORS Most measurements involve the reading of some type of scale, and the most obvious type of uncertainty in a measurement is that associated with the limits of the accuracy to which the scale can be read.
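A widely used rule of thumb for the reading error described at the end of this excerpt is to take the uncertainty as half the smallest scale division; this convention is an assumption here, not a rule quoted from the book. A minimal sketch:

```python
# Reading uncertainty taken as half the smallest scale division
# (a common convention, assumed here for illustration).
smallest_division_mm = 1.0            # a metre ruler graduated in millimetres
reading_uncertainty_mm = smallest_division_mm / 2

length_mm = 137.0                     # hypothetical scale reading
print(f"length = ({length_mm:.1f} ± {reading_uncertainty_mm:.1f}) mm")
```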
  • The Uncertainty of Measurements
    Physical and Chemical Metrology: Impact and Analysis
    7 Measurement Uncertainty and Its Evaluation: Evolving Concepts MEASUREMENT UNCERTAINTY: LIMITS TO ERROR A measurement result whose accuracy is entirely unknown is worth nothing. Accuracy depends upon the error of measurement. It is also accepted that the actual error of a measurement result is unknown and unknowable. This is a contradictory situation. Without knowing its error, one cannot utilize a measurement result confidently for making a decision, but at the same time its error is unknown and unknowable. Metrologists have found a solution to this contradiction in the application of statistical tools. If the absolute value of error cannot be determined, the “limits to error” can always be “inferred” by using rudimentary statistical techniques. Here the term limits to error means the maximum value of expected error. This is a worst-case situation in the sense that the error will not exceed the limit. The actual error will be less than the limits-to-error value. Another term used in the context of limits to error is inferred. When we infer something, we are not sure about its correctness, so any inference about limits to error always has a risk of being incorrect. If the risk of being incorrect is reasonably low, the knowledge about the limits to error is better than not knowing anything about the quantity of error at all. The “limits to error” of a measurement result is in fact the uncertainty of measurement. The uncertainty is generated by two factors: • One factor is the precision of the measurement process by which the result has been derived. This is evaluated through repeated application of the measurement process to obtain repeat measurements of the same parameter. Precision is the characteristic of the measurement process linked to the closeness of repeat measurements among themselves. Thus, a numerical index of precision should come from the variability of the repeat measurement observations.
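One common way to "infer" limits to error from the scatter of repeat measurements, with a stated risk of being wrong, is a confidence interval on the mean. The sketch below is an illustration under that assumption (the readings and the 95 % level are made up, and SciPy is assumed to be available).

```python
import statistics
from scipy import stats   # assumption: SciPy is installed

# Hypothetical repeat observations of the same parameter.
readings = [10.21, 10.18, 10.25, 10.19, 10.22, 10.20]

n = len(readings)
mean = statistics.mean(readings)
s = statistics.stdev(readings)     # numerical index of precision from the repeat scatter

# Infer the limits to error of the mean at 95 % confidence; the remaining 5 %
# is the accepted risk that the true value lies outside the inferred limits.
t_factor = stats.t.ppf(0.975, df=n - 1)
limit = t_factor * s / n ** 0.5

print(f"result = {mean:.3f} ± {limit:.3f} (95 % confidence)")
```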
  • Intelligent System and Computing
    • Yang (Cindy) Yi (Author)
    • 2020 (Publication Date)
    • IntechOpen (Publisher)
    If not, the second most severe or less known uncertainty source is considered, and so on. Once one source of uncertainty has been chosen, it is acted upon and the test u(Y) < u* is run again. The sequence is repeated until u(Y) < u* or the measurement is recognised as incompatible with the given target uncertainty. • The measurement result is presented in a form that also contains an expression of the evaluated uncertainty. As much detail as possible about how the evaluation was performed is recommended by the GUM. Specific guidance is given in its Section 7 [1]. The uncertainty of the measurand Y that appears in the target uncertainty test is ‘evaluated’ and not ‘estimated’. The verb ‘to evaluate’ is used to highlight that the input quantities X_i are typically grouped into two categories. The standard uncertainty of those in the first group is determined by their repeated observation (Type A evaluation, Section 4.2 in [1]). The standard uncertainty of those in the second is instead determined by ‘scientific judgement based on all of the available information’ (Type B evaluation, cf. Section 4.3 in [1]). In the first case, uncertainty evaluation is based on probability density functions estimated from the frequency distribution of observations. In the second, the evaluation is based on probability density functions postulated on the basis of reputable sources of information such as handbooks or calibration certificates. In both Type A and Type B evaluations, the complete characterisation of the probability density functions p(X_i) of the input quantities is needed. Complete characterisation means that the mean E(X_i) = μ_i, the standard uncertainty u(X_i) and the distribution type (e.g. normal, triangular, rectangular/uniform) must be made available for each X_i. Then, the measurement function is expanded into a partial sum of the Taylor series around the input quantity means.
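The Taylor-series expansion mentioned at the end of this excerpt leads, to first order and for uncorrelated inputs, to the familiar combination u_c(Y)^2 = Σ (∂f/∂X_i)^2 u(X_i)^2. The sketch below illustrates this for an assumed measurement function (electrical power P = V·I) with made-up Type A and Type B standard uncertainties; none of the numbers come from the excerpt.

```python
import math

def power(v, i):
    """Assumed measurement function Y = f(V, I) = V * I."""
    return v * i

V, u_V = 12.00, 0.05    # volts; Type B standard uncertainty, e.g. from a calibration certificate (assumed)
I, u_I = 1.500, 0.012   # amperes; Type A standard uncertainty from repeated observations (assumed)

# Sensitivity coefficients: partial derivatives of f evaluated at the input means.
dP_dV = I
dP_dI = V

# First-order, uncorrelated-input combination of standard uncertainties.
u_c = math.sqrt((dP_dV * u_V) ** 2 + (dP_dI * u_I) ** 2)
print(f"P = {power(V, I):.3f} W, combined standard uncertainty u_c = {u_c:.3f} W")
```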
  • Workshop Physics Activity Guide Module 1
    • Priscilla W. Laws, David P. Jackson, Brett J. Pearson (Authors)
    • 2023 (Publication Date)
    • Wiley (Publisher)
    However, such mistakes can, at least in principle, always be eliminated by being very careful, checking our work, having someone else check our work, etc. Therefore, throughout this book we will assume there are no “human errors” and instead focus on inherent uncertainties and systematic errors. Inherent Uncertainties The limitations of the rulers in Section 2.2 lead to inherent uncertainty; one can only be so precise with the measuring tool available, so there is always some uncertainty inherent in the measurement process. Such inherent uncertainties do not result from mistakes or errors. Instead, they are attributed (at least in part) to the impossibility of building measuring equipment that is precise to an infinite number of significant figures. The ruler provides us with an example of this. It can be made better and better, but it always has an ultimate limit of precision. There are also examples of inherent uncertainties that are not related to the measuring device. For instance, if you measure the width of a door, you might find that you get slightly different values depending on where the width is measured. Clearly, a door in the real world is not going to be a perfect rectangle with opposite sides being exactly parallel. We know that there will be imperfections in the door, and these imperfections lead to an inherent uncertainty in quantities such as its width. In some sense, the door doesn’t really have a perfectly well-defined width. Finally, inherent uncertainty can also be part of the process being studied, as we will discuss later. Systematic Errors Systematic errors result when some type of error occurs over and over again in a systematic way. For example, suppose you are making a distance measurement and use a ruler that was poorly calibrated. In this case, a careful reading of 5.0 cm on the ruler might actually correspond to a true measurement of 4.9 cm.
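One way to handle the miscalibrated-ruler example at the end of this excerpt is to apply a calibration correction to every reading. The sketch below assumes the miscalibration is a simple constant scale factor (an assumption for illustration; the excerpt only gives the single 5.0 cm to 4.9 cm pair).

```python
# Assumed constant scale factor: an indicated 5.0 cm corresponds to a true 4.9 cm.
scale_correction = 4.9 / 5.0          # true length per indicated length

indicated_cm = [5.0, 12.3, 20.8]      # hypothetical ruler readings
for reading in indicated_cm:
    corrected = reading * scale_correction
    print(f"indicated {reading:.1f} cm -> corrected {corrected:.2f} cm")
```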
  • An Introduction to Uncertainty in Measurement
    Using the GUM (Guide to the Expression of Uncertainty in Measurement)
    However, just as in the previous case of usefully repeatable measurements with their ‘visible’ or explicit scatter, the uncertainty of the correction can be estimated as representing notionally the implicit scatter of its associated random errors. So, whether or not we have usefully repeatable measurements, the measurand is measured with an uncertainty that is described as follows. 4.2 Uncertainty is a parameter that characterises the dispersion of values The dispersion of data is characterised numerically by a standard deviation (defined in section 4.3). From this standard deviation, it is common practice to obtain a ‘±’ figure. This figure describes the range of values that is very likely to enclose the true value of the measurand. The number following the ‘±’ is normally about twice the standard deviation of the measurand and can be loosely referred to as the ‘uncertainty’ attaching to the measurand. As will be discussed in chapter 10, this uncertainty is referred to in the GUM as the ‘expanded’ uncertainty, expressing the ‘expansion’ by that factor of about two from the standard deviation of the measurand. If a value of a mass is given as (1.24 ± 0.13) kg, the actual value is asserted as very likely to be somewhere between 1.11 kg and 1.37 kg. The uncertainty is 0.13 kg and we note that uncertainty, like standard deviation, is a positive quantity. By contrast, an error may be positive or negative. 4.2.1 Type A and Type B categories of uncertainty These do not differ in essence, but are given these names in order to convey the notion that they are evaluated in different ways. 4.2.1.1 Type A uncertainties are evaluated by statistical methods In a common situation, a sequence of repeated measurements giving slightly different values (because of random errors) is analysed by calculating the mean and then considering individual differences from this mean.
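The mass example in this excerpt can be reproduced directly: with a coverage factor of about two, an assumed standard uncertainty of 0.065 kg gives the quoted ± 0.13 kg. A minimal sketch (the 0.065 kg figure is inferred from the excerpt's numbers, not stated in it):

```python
# Expanded uncertainty = coverage factor (k ≈ 2) times the standard uncertainty.
standard_uncertainty_kg = 0.065   # assumed standard deviation of the measurand
k = 2                             # coverage factor
expanded_kg = k * standard_uncertainty_kg

mass_kg = 1.24
low, high = mass_kg - expanded_kg, mass_kg + expanded_kg
print(f"mass = ({mass_kg:.2f} ± {expanded_kg:.2f}) kg")
print(f"true value very likely between {low:.2f} kg and {high:.2f} kg")
```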
  • Uncertainty
    A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis
    Uncertainties in empirical quantities can arise from a variety of different kinds of sources. The appropriate method to characterize the uncertainty, and the appropriate method for trying to reduce it, generally depend on the particular kind of source. Hence, we have found it helpful to classify uncertainty in empirical quantities in terms of the different kinds of source from which it can arise. These include the following: • Statistical variation • Subjective judgment • Linguistic imprecision • Variability • Inherent randomness • Disagreement • Approximation In the following section we will distinguish and discuss each source of uncertainty in turn. 4.4.1. Random Error and Statistical Variation The most-studied and best-understood kind of uncertainty arises from random error in direct measurements of a quantity. No measurement of an empirical quantity, such as the speed of light, can be absolutely exact. Imperfections in the measuring instrument and observational technique will inevitably give rise to variations from one observation to the next. This is as true of the complex apparatus used to measure the speed of light, even though it may have an accuracy of 1 part in 10^12, as it is of a wooden yardstick used by a carpenter with an accuracy of 1 in 1,000. It is just the relative size of the errors that may be different. The resulting uncertainty depends on the size of the variations between observations and the number of observations taken. The armamentarium of statistics provides a variety of well-known techniques for quantifying this uncertainty, such as standard deviation, confidence intervals, and others. A number of these are discussed in Chapter 5. 4.4.2. Systematic Error and Subjective Judgment Any measurement involves not only random error but also systematic error.
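The excerpt notes that the resulting uncertainty depends on both the size of the variations between observations and on how many observations are taken. The sketch below (with made-up readings) shows the standard error of the mean shrinking as more repeats are included.

```python
import statistics

# Hypothetical repeat measurements of a quantity.
readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78, 9.84, 9.81, 9.80, 9.82]

for n in (3, 5, 10):
    subset = readings[:n]
    s = statistics.stdev(subset)          # spread of the observations
    sem = s / n ** 0.5                    # standard error of the mean
    print(f"n = {n:2d}: mean = {statistics.mean(subset):.3f}, standard error = {sem:.4f}")
```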
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.