Mathematics

Basic Probability

Basic probability deals with the likelihood of an event occurring: it is the part of mathematics concerned with calculating the chances of the different outcomes of a given situation. The concept is fundamental to understanding uncertainty and to making informed decisions in fields such as statistics, finance, and science.

Written by Perlego with AI-assistance

12 Key excerpts on "Basic Probability"

  • Book cover image for: Applied Medical Statistics
    • Jingmei Jiang (Author)
    • 2022 (Publication Date)
    • Wiley (Publisher)
    Therefore, in this chapter, we introduce probability concepts and useful notation that are most pertinent to biomedicine and biostatistical analysis.
    3 Fundamentals of Probability. Contents:
    • 3.1 Sample Space and Random Events (3.1.1 Definitions of Sample Space and Random Events; 3.1.2 Operation of Events)
    • 3.2 Relative Frequency and Probability (3.2.1 Definition of Probability; 3.2.2 Basic Properties of Probability)
    • 3.3 Conditional Probability and Independence of Events (3.3.1 Conditional Probability; 3.3.2 Independence of Events)
    • 3.4 Multiplication Law of Probability
    • 3.5 Addition Law of Probability (3.5.1 General Addition Law; 3.5.2 Addition Law of Mutually Exclusive Events)
    • 3.6 Total Probability Formula and Bayes’ Rule (3.6.1 Total Probability Formula; 3.6.2 Bayes’ Rule)
    • 3.7 Summary
    • 3.8 Exercises
    3.1 Sample Space and Random Events
    In nature, people often encounter two types of phenomena. One is the deterministic phenomenon, characterized by conditions under which the results are completely predictable; that is, the same result is observed each time the experiment is conducted. For example, heavy objects thrown into the sky inevitably fall to the ground because of the earth’s gravity, and water at 100°C under standard atmospheric pressure inevitably boils. The other is the random phenomenon, characterized by conditions under which the results are not predictable; that is, one of several possible outcomes is observed each time the experiment is conducted, for example, the outcome (heads or tails) of flipping a coin, or the number of calls received by an emergency center in an hour. In a random phenomenon the actual appearance of a predicted result, such as predicting heads when we flip a coin, is accidental; yet these occasional outcomes demonstrate a certain regularity after many repeated experiments and observations, which is regarded as a statistical law.
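The statistical regularity the excerpt describes can be seen in a short simulation (a sketch of ours, not from the book): the relative frequency of heads is erratic over a few flips but stabilizes near 0.5 as the number of flips grows.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def relative_frequency_of_heads(n_flips):
    """Flip a fair coin n_flips times; return the fraction that came up heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# Erratic for a few flips, but settles near 0.5 as flips accumulate:
for n in (10, 100, 10_000, 100_000):
    print(n, relative_frequency_of_heads(n))
```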
  • Book cover image for: Statistics and Probability for Engineering Applications
    • William DeCoursey (Author)
    • 2003 (Publication Date)
    • Newnes (Publisher)
    In this chapter we examine the basic ideas and approaches to probability and its calculation. We look at calculating the probabilities of combined events. Under some circumstances probabilities can be found by using counting theory involving permutations and combinations. The same ideas can be applied to somewhat more complex situations, some of which will be examined in this chapter.
    2.1 Fundamental Concepts
    (a) Probability as a specific term is a measure of the likelihood that a particular event will occur. Just how likely is it that the outcome of a trial will meet a particular requirement? If we are certain that an event will occur, its probability is 1 or 100%. If it certainly will not occur, its probability is zero. The first situation corresponds to an event which occurs in every trial, whereas the second corresponds to an event which never occurs. At this point we might be tempted to say that probability is given by relative frequency, the fraction of all the trials in a particular experiment that give an outcome meeting the stated requirements. But in general that would not be right. Why? Because the outcome of each trial is determined by chance. Say we toss a fair coin, one which is just as likely to give heads as tails. It is entirely possible that six tosses of the coin would give six heads or six tails, or anything in between, so the relative frequency of heads would vary from zero to one. If it is just as likely that an event will occur as that it will not occur, its true probability is 0.5 or 50%. But the experiment might well result in relative frequencies all the way from zero to one. Then the relative frequency from a small number of trials gives a very unreliable indication of probability. In Section 5.3 we will see how to make more quantitative calculations concerning the probabilities of various outcomes when coins are tossed randomly or similar trials are made.
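DeCoursey's point that small-sample relative frequencies are unreliable can be checked with a quick simulation (our sketch, not from the book): repeating the six-toss experiment many times produces relative frequencies of heads anywhere from 0/6 to 6/6, even though the true probability is 0.5.

```python
import random
from collections import Counter

random.seed(1)  # reproducible run

def heads_in_six_tosses():
    """One experiment: toss a fair coin six times, count the heads."""
    return sum(random.randint(0, 1) for _ in range(6))

# Repeat the six-toss experiment many times; tally how often each
# relative frequency of heads (0/6 .. 6/6) shows up.
counts = Counter(heads_in_six_tosses() for _ in range(10_000))
for k in sorted(counts):
    print(f"{k}/6 heads in {counts[k]} of 10000 experiments")
```

Every relative frequency from 0 to 1 occurs, but values near 0.5 dominate, which is why only a large number of trials pins down the probability.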
  • Book cover image for: Probability, Statistics, and Stochastic Processes for Engineers and Scientists
    • Aliakbar Montazer Haghighi, Indika Wickramasinghe (Authors)
    • 2020 (Publication Date)
    • CRC Press (Publisher)
    2 Basics of Probability
    2.1 BASICS OF PROBABILITY
    In this book, unless otherwise stated, we will assume that occurrences of events are not deterministic. In other words, the events cannot be entirely determined by the initial states and inputs; instead, they are perceived to be random, probabilistic, or stochastic. We will start with basic definitions and develop, in the process, the vocabulary needed to discuss the subject matter with clarity sufficient even to address the more advanced material to be encountered later in this book.
    Definition 2.1 When an experiment is performed, its results are referred to as the outcomes. If such results are not deterministic, the experiment is called a chance experiment, a random experiment, or a trial. Thus, from now to the end of this book, we will be considering random experiments with uncertain outcomes.
    Definition 2.2 A set of outcomes is called an event. An event with only one outcome, that is, a singleton, is referred to as an elementary or simple event. Hence, in general, an event is a collection of simple events. An event may be defined as an element of a σ-field, denoted by 𝕊 (defined in Chapter 1), of subsets of the sample space Ω. Occurrence of an event depends on which of its member outcomes take place. The collection of all possible outcomes is called the sample space, usually denoted by Ω. Thus, an event is a subset of the sample space, whose elements are called the sample points.
    Definition 2.3 A sample space that contains a finite or a countable collection of sample points is referred to as a discrete sample space, whereas a sample space that contains an uncountable collection of sample points is referred to as a continuous sample space.
    Example 2.1 The following are examples of finite and infinite discrete sample spaces: i.
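The set language of Definitions 2.1 to 2.3 maps directly onto sets in code. A minimal sketch (our example, a die roll rather than anything from the book):

```python
# Sample space for rolling one six-sided die; events are subsets of it.
omega = {1, 2, 3, 4, 5, 6}

even = {2, 4, 6}            # a compound event
at_least_five = {5, 6}      # another event
simple = {3}                # a singleton: an elementary (simple) event

# Every event is a subset of the sample space.
assert even <= omega and at_least_five <= omega and simple <= omega

# Occurrence: an event occurs when the observed outcome is one of its members.
outcome = 6
print(outcome in even)              # True: the event "even" occurred
print(even & at_least_five)         # outcomes shared by both events: {6}
print(even | at_least_five)         # "even or at-least-five": {2, 4, 5, 6}
```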
  • Book cover image for: Statistical Methods Of Geophysical Data Processing
    • Vladimir Troyan, Yurii Kiselev (Authors)
    • 2010 (Publication Date)
    • World Scientific (Publisher)
    Chapter 1 Basic Concepts of Probability Theory
    The subject of probability theory is the calculus of random events, which exhibit statistical stability (statistical stability of frequencies) under given requirements of an idealized experiment. Probability theory is deductive and is based on a system of postulates. It is the basis for the methods of mathematical statistics, which use the inductive method of decision-making about the properties of objects, or about hypotheses concerning the nature of an investigated phenomenon, with the use of data obtained by conducting experiments. Geophysical observations are the results of random experiments and are random events. Because of the random nature of the observations, the question arises as to the probability that these random events occur.
    1.1 The Definition of Probability
    A central concept in the theory of probability is the random event. Random events are the results of measurements or experiments whose outcomes are uncertain, and it is important to know the probability with which they occur.
    1.1.1 Set of elementary events
    Let some experiment have a finite number of outcomes ω1, ω2, . . . , ωn. The outcomes ω1, . . . , ωn are called the elementary events, and their set Ω = {ω1, ω2, . . . , ωn} is called the space of elementary events or the space of outcomes.
    Example 1.1. At a single tossing of a coin, the space of elementary events Ω = {H, T} consists of two points, where H is a head and T is a tail.
    Example 1.2. At n-multiple tossing of the coin, the space of elementary events consists of combinations of outcomes of the single experiment, Ω = {ω : ω = (a1, a2, . . . , an), ai = H or T}, with total number of outcomes N(Ω) = 2^n.
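The 2^n outcomes of Example 1.2 can be enumerated directly. A small sketch (ours, not from the book):

```python
from itertools import product

def coin_space(n):
    """Space of elementary events for n coin tosses: tuples of 'H'/'T'."""
    return list(product("HT", repeat=n))

print(coin_space(1))         # [('H',), ('T',)]
print(len(coin_space(3)))    # 8
```

The length of the enumerated space matches the counting argument: 2^n outcomes for n tosses.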
  • Book cover image for: Random Phenomena
    Fundamentals of Probability and Statistics for Engineers

    • Babatunde A. Ogunnaike (Author)
    • 2009 (Publication Date)
    • CRC Press (Publisher)
    Chapter 3 Fundamentals of Probability Theory. Contents:
    • 3.1 Building Blocks
    • 3.2 Operations (3.2.1 Events, Sets, and Set Operations; 3.2.2 Set Functions; 3.2.3 Probability Set Function; 3.2.4 Final Considerations)
    • 3.3 Probability (3.3.1 The Calculus of Probability; 3.3.2 Implications)
    • 3.4 Conditional Probability (3.4.1 Illustrating the Concept; 3.4.2 Formalizing the Concept; 3.4.3 Total Probability; 3.4.4 Bayes’ Rule)
    • 3.5 Independence
    • 3.6 Summary and Conclusions
    • Review Questions; Exercises; Application Problems
    "Before setting out to attack any definite problem it behooves us first, without making any selection, to assemble those truths that are obvious as they present themselves to us and afterwards, proceeding step by step, to inquire whether any others can be deduced from these." (René Descartes, 1596–1650)
    The paradox of randomly varying phenomena—that the aggregate ensemble behavior of unpredictable, irregular, individual observations is stable and regular—provides a basis for developing a systematic analysis approach.
  • Book cover image for: Probability and Statistics for Economists
    CHAPTER 1 Basic Probability Theory
    1.1 INTRODUCTION
    Probability theory is foundational for economics and econometrics. Probability is the mathematical language used to handle uncertainty, which is central for modern economic theory. Probability theory is also the foundation of mathematical statistics, which is the foundation of econometric theory.
    Probability is used to model uncertainty, variability, and randomness. When we say that something is "uncertain", we mean that the outcome is unknown. For example, how many students will there be in next year's Ph.D. entering class at your university? "Variability" means that the outcome is not the same across all occurrences. For example, the number of Ph.D. students fluctuates from year to year. "Randomness" means that the variability has some sort of pattern. For example, the number of Ph.D. students may fluctuate between 20 and 30, with 25 more likely than either 20 or 30. Probability gives us a mathematical language to describe uncertainty, variability, and randomness.
    1.2 OUTCOMES AND EVENTS
    Suppose you take a coin, flip it in the air, and let it land on the ground. What will happen? Will the result be "heads" (H) or "tails" (T)? We do not know the result in advance, so we describe the outcome as random. Suppose you record the change in the value of a stock index over a period of time. Will the value increase or decrease? Again, we do not know the result in advance, so we describe the outcome as random. Suppose you select an individual at random and survey them about their economic situation. What is their hourly wage? We do not know in advance. The lack of foreknowledge leads us to describe the outcome as random.
    We will use the following terms. An outcome is a specific result. For example, in a coin flip, an outcome is either H or T. If two coins are flipped in sequence, we can write an outcome as HT for a head and then a tail. A roll of a six-sided die has the six outcomes {1, 2, 3, 4, 5, 6}.
  • Book cover image for: Reliability Engineering and Risk Analysis
    A Practical Guide, Third Edition

    • Mohammad Modarres, Mark P. Kaminskiy, Vasiliy Krivtsov (Authors)
    • 2016 (Publication Date)
    • CRC Press (Publisher)
    2 Basic Reliability Mathematics: Review of Probability and Statistics
    2.1 INTRODUCTION
    In this chapter, we discuss the elements of mathematical theory that are relevant to the study of the reliability of physical objects. We begin with a presentation of basic concepts of probability. Then, we briefly consider some fundamental concepts of statistics that are used in reliability analysis.
    2.2 ELEMENTS OF PROBABILITY
    Probability is a concept that people use formally and casually every day. Weather forecasts are probabilistic in nature. People use probability in their casual conversations to show their perception of the likely occurrence or nonoccurrence of particular events. Odds are given for the outcomes of sport events and are used in gambling. The formal use of probability concepts is widespread in science, in astronomy, biology, and engineering. In this chapter, we discuss the formal application of probability theory in the field of reliability engineering.
    2.2.1 SETS AND BOOLEAN ALGEBRA
    To perform operations associated with probability, it is often necessary to use sets. A set is a collection of items or elements, each with some specific characteristics. A set that includes all items of interest is referred to as a universal set, denoted by Ω. A subset refers to a collection of items that belong to a universal set. For example, if set Ω represents the collection of all pumps in a power plant, then the collection of electrically-driven pumps is a subset E of Ω. Graphically, the relationship between subsets and sets can be illustrated through Venn diagrams. The Venn diagram in Figure 2.1 shows the universal set Ω by a rectangle and subsets E1 and E2 by circles. It can also be seen that E2 is a subset of E1. The relationship between subsets E1 and E2 and the universal set can be symbolized by E2 ⊂ E1 ⊂ Ω.
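The pump example translates naturally into set operations. A sketch with hypothetical labels (the pump names P1..P8 and the loop assignments are ours, not the book's):

```python
# Universal set: all pumps in a hypothetical plant.
omega = {"P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"}

e1 = {"P1", "P2", "P3", "P4"}   # say, pumps in one cooling loop
e2 = {"P2", "P3"}               # say, the electrically driven pumps in that loop

# The chain E2 ⊂ E1 ⊂ Ω from the Venn-diagram discussion.
assert e2 < e1 < omega          # '<' is proper-subset for Python sets

print(e1 | e2)       # union of E1 and E2
print(e1 - e2)       # pumps in E1 but not in E2
print(omega - e1)    # complement of E1 with respect to Ω
```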
  • Book cover image for: Introduction To The Basics Of Reliability And Risk Analysis, An
    The only fundamental difference is that it is not necessary to resort to the procedure of taking a limit. Let us consider an experiment with N possible elementary, mutually exclusive, and equally probable outcomes A1, A2, ..., AN. We are interested in the event E which occurs if any one of M elementary outcomes A1, A2, ..., AM occurs, i.e. E = A1 ∪ A2 ∪ ... ∪ AM. Since the events are mutually exclusive and equally probable,
    P(E) = M/N = (number of outcomes of interest) / (total number of possible outcomes).   (4.7)
    This result is very important because it allows computing the probability with the methods of combinatorial calculus; its applicability is, however, limited to the case in which the event of interest can be decomposed into a finite number of mutually exclusive and equally probable outcomes. Furthermore, the classical definition of probability entails the possibility of performing repeated trials; it requires that the number of outcomes be finite and that they be equally probable, i.e. it defines probability by resorting to a concept of frequency.
    4.3.4 Probability space
    Once a probability measure is defined in one of the above illustrated ways, the mathematical theory of probability is founded on the three fundamental axioms of Kolmogorov introduced in Section 4.3.1, independently of the definition. All the theorems of probability follow from these three axioms. When assigning probability values to events of a sample space, a difficulty arises for continuous sample spaces, e.g. Ω = (0, 1). Indeed, continuous intervals cannot be constructed by adding elementary points in a countable manner, and correspondingly, probabilities of continuous intervals cannot be assigned by the addition law of probability. In other words, if we were to assign to each E ∈ (0, 1) a probability p(E), then the sum of all p(E)'s would go to infinity, unless p(E) = 0 for 'almost all' E ∈ (0, 1).
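The classical rule (4.7) reduces probability to counting. A sketch (the two-dice example is ours, not the book's): the probability of rolling a total of 7 with two fair dice, computed by enumerating equally probable outcomes.

```python
from fractions import Fraction
from itertools import product

# Classical definition (Eq. 4.7): P(E) = outcomes of interest / total outcomes,
# valid when outcomes are finite, mutually exclusive, and equally probable.
outcomes = list(product(range(1, 7), repeat=2))   # 36 equally likely pairs
favorable = [o for o in outcomes if sum(o) == 7]  # (1,6), (2,5), ..., (6,1)

p = Fraction(len(favorable), len(outcomes))
print(len(outcomes), len(favorable), p)   # prints: 36 6 1/6
```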
  • Book cover image for: Applied Statistics for Civil and Environmental Engineers
    • N. T. Kottegoda, R. Rosso (Authors)
    • 2009 (Publication Date)
    • Wiley-Blackwell (Publisher)
    Basic Probability Concepts
    for simple events A1, A2, A3, and A4, respectively. The probabilities, assigned on the basis of relative frequencies, are Pr[A1] = 5/36, Pr[A2] = 15/36, Pr[A3] = 10/36, Pr[A4] = 6/36, which satisfy Axiom i. Since 5/36 + 15/36 + 10/36 + 6/36 = (5 + 15 + 10 + 6)/36 = 1, Pr[Ω] = Pr[A1 + A2 + A3 + A4] = 1, so that Axiom ii is also satisfied. Consider two mutually exclusive events, say, C = A2 ≡ {S: c/4 ≤ S < c/2} and D = A3 + A4 ≡ {S: c/2 ≤ S < c}. By combining the foregoing frequencies, one can see that event C occurred 15 times in 36 years, whereas event D occurred 10 + 6 = 16 times during that period. The associated probabilities are thus Pr[C] = 15/36 and Pr[D] = 16/36, respectively. Since C + D = A2 + A3 + A4, Pr[C + D] = Pr[A2 + A3 + A4] = (15 + 10 + 6)/36 = 15/36 + 16/36 = Pr[C] + Pr[D], which satisfies Axiom iii.
    The theory of probability deals logically with the relationships among probability measures. Because of the deductive character of the theory, one can develop all such relationships entirely from the three axioms described by Eqs. (2.2.1) to (2.2.3).
    2.2.3 Addition rule
    The third axiom states that the basic addition property of probability can be extended to any sequence of mutually exclusive events. If A1, A2, ..., Ak ∈ A, and Ai Aj = Ø for any i ≠ j, with i, j = 1, 2, ..., k, then
    Pr[A1 + A2 + ··· + Ak] = Pr[A1] + Pr[A2] + ··· + Pr[Ak].   (2.2.4)
    From this rule one can derive a number of further properties of probability that can be used to perform additive operations in the event space, such as union and intersection of events. Axiom ii can be applied to an event A and its complement, A^c; A and A^c jointly satisfy the conditions for exclusive events, thus obtaining Pr[A + A^c] = Pr[A] + Pr[A^c]. But since A + A^c = Ω, we also have Pr[A + A^c] = Pr[Ω] = 1 from Eq. (2.2.2). By combining these results, Pr[A^c] = 1 − Pr[A].
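The excerpt's numerical checks of the axioms, and the complement rule it derives, can be reproduced with exact fractions (a sketch of ours, using the excerpt's 36-year relative frequencies):

```python
from fractions import Fraction as F

# Relative-frequency probabilities from the excerpt's 36-year record.
pr = {"A1": F(5, 36), "A2": F(15, 36), "A3": F(10, 36), "A4": F(6, 36)}

# Axiom i: every probability lies between 0 and 1.
assert all(0 <= p <= 1 for p in pr.values())

# Axiom ii: the probabilities over the whole sample space sum to 1.
assert sum(pr.values()) == 1

# Axiom iii (additivity for mutually exclusive events): C = A2, D = A3 + A4.
pr_c = pr["A2"]
pr_d = pr["A3"] + pr["A4"]
print(pr_c + pr_d)       # Pr[C + D] = Pr[C] + Pr[D] = 31/36

# Complement rule derived from the axioms: Pr[C^c] = 1 - Pr[C].
print(1 - pr_c)          # 7/12, i.e. 21/36 reduced
```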
  • Book cover image for: Classical and Quantum Information Theory
    An Introduction for the Telecom Scientist

    1 Probability basics
    Because of the reader's interest in information theory, it is assumed that, to some extent, he or she is relatively familiar with probability theory, its main concepts, theorems, and practical tools. Whether a graduate student or a confirmed professional, it is possible, however, that a good fraction, if not all, of this background knowledge has been somewhat forgotten over time, or has become a bit rusty, or even worse, completely obliterated by one's academic or professional specialization! This is why this book includes a couple of chapters on probability basics. Should such basics be crystal clear in the reader's mind, however, then these two chapters could be skipped at once. They can always be revisited later for backup, should some of the associated concepts and tools present any hurdles in the following chapters. This being stated, some expert readers may yet dare to test their knowledge by considering some of this chapter's (easy) problems, for starters. Finally, any parent or teacher might find the first chapter useful to introduce children and teens to probability.
    I have sought to make this review of probability basics as simple, informal, and practical as it could be. Just like the rest of this book, it is definitely not intended to be a math course, according to the canonic theorem–proof–lemma–example suite. There exist scores of rigorous books on probability theory at all levels, as well as many Internet sites providing elementary tutorials on the subject. But one will find there either too much or too little material to approach Information Theory, leading to potential discouragement. Here, I shall be content with only those elements and tools that are needed or are used in this book. I present them in an original and straightforward way, using fun examples. I have no concern to be rigorous and complete in the academic sense, but only to remain accurate and clear in all possible simplifications.
  • Book cover image for: Risk Assessment and Decision Analysis with Bayesian Networks
    5 The Basics of Probability
    5.1 Introduction
    In discussing the difference between the frequentist and subjective approaches to measuring uncertainty, we were careful in Chapter 4 not to mention the word probability. That is because we want to define probability in such a way that it makes sense for whatever reasonable approach to measuring uncertainty we choose, be it frequentist, subjective, or even an approach that nobody has yet thought of. To do this, in Section 5.2 we describe some properties (called axioms) that any reasonable measure of uncertainty should satisfy; then we define probability as any measure that satisfies those properties. The nice thing about this way of defining probability is that not only does it avoid the problem of vagueness, but it also means that we can have more than one measure of probability. In particular, we will see that both the frequentist and subjective approaches satisfy the axioms, and hence both are valid ways of defining probability.
    In Section 5.3 we introduce the crucial notion of probability distributions. In Section 5.4 we use the axioms to define the crucial issue of independence of events. An especially important probability distribution—the Binomial distribution—which is based on the idea of independent events, is described in Section 5.5. Finally, in Section 5.6 we will apply the lessons learned in the chapter to solve some of the problems we set in Chapter 2 and debunk a number of other probability fallacies.
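The axioms this chapter builds on are the standard Kolmogorov axioms; for reference (standard material, stated by us rather than quoted from the book), they can be written as:

```latex
\begin{align*}
\text{Axiom 1:}\quad & P(A) \ge 0 \ \text{for every event } A, \\
\text{Axiom 2:}\quad & P(\Omega) = 1 \ \text{for the sample space } \Omega, \\
\text{Axiom 3:}\quad & P\!\left(\textstyle\bigcup_i A_i\right) = \textstyle\sum_i P(A_i)
  \ \text{for mutually exclusive events } A_1, A_2, \ldots
\end{align*}
```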
    5.2 Some Observations Leading to Axioms and Theorems of Probability
    Before stating the axioms of probability we are going to list some points that seem to be reasonable and intuitive for both the frequentist and subjective definitions of chance. So, consider again statements like the following:
  • Book cover image for: A Primer in Probability
    • Kathleen Subrahmaniam (Author)
    • 2018 (Publication Date)
    • CRC Press (Publisher)
    . . . , k determine the probability model for a random experiment. From the probability model we can obtain the probability of any event associated with the experiment.
    DEFINITION 2.4 The probability of any event E is the sum of the probabilities of the simple events which constitute the event E.
    Two somewhat special cases arise: the entire sample space and the impossible event. Since all the possible outcomes of an experiment must be enumerated in the sample space, Pr(S) = 1. This would be translated as "some event in S must occur", which seems very reasonable from the definition of S. If any event is not a possible outcome of the experiment, then it has no corresponding sample points in S. We will call this event an impossible event, and its probability is obviously zero.
    Referring to our penny-nickel experiment, let us develop an appropriate probability model and calculate the probabilities corresponding to the events U, V, W, X, Y and Z. It would seem reasonable, and it can be verified by experimentation, that each outcome is equally likely. Assigning probability 1/4 to each point in S2 and using Definition 2.4, we find that Pr(U) = 1/2, Pr(V) = 1/4, Pr(W) = 3/4, Pr(X) = 1/2, Pr(Y) = 1/2 and Pr(Z) = 1/4.
    2.3 COMBINING EVENTS
    Since sets and events are analogous, we will now discuss how events, like sets, may be combined. In combining events we are faced with the problem of translating words into logical expressions. In everyday usage, expressions of the form "A or B" may be interpreted in two different ways:
    1. Exclusive: A or B, but not both
    2. Inclusive: A or B, or both
    In the following discussion we shall restrict ourselves to the inclusive form. Simultaneous membership in two sets A and B is expressed in words by the term "and".
    DEFINITION 2.6 The intersection of the events A and B in S is the set of all points belonging to A and to B: A and B = AB = A ∩ B = {x | x ∈ A and x ∈ B}.
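The penny-nickel model can be sketched directly from Definition 2.4 (our illustration; the excerpt's events U through Z are not defined in this extract, so the events below are our own):

```python
from fractions import Fraction as F
from itertools import product

# Penny-nickel experiment: flip a penny, then a nickel. Four sample points,
# each assigned probability 1/4 (equally likely).
space = {pt: F(1, 4) for pt in product("HT", repeat=2)}

def pr(event):
    """Definition 2.4: Pr(E) = sum of the probabilities of E's simple events."""
    return sum(space[pt] for pt in event)

at_least_one_head = {pt for pt in space if "H" in pt}
both_alike = {("H", "H"), ("T", "T")}

print(pr(at_least_one_head))                # 3/4
print(pr(both_alike))                       # 1/2
print(pr(at_least_one_head & both_alike))   # intersection is {HH}: 1/4
print(pr(space.keys()))                     # Pr(S) = 1
```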
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.