Mathematics

Independent Events Probability

Independent events in probability refer to two or more events that do not affect each other's outcomes. In other words, the occurrence of one event does not influence the probability of the other event happening. When events are independent, the probability of both events occurring can be found by multiplying their individual probabilities. This concept is fundamental in probability theory and has applications in various fields.
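
As a quick illustration of the multiplication rule described above, the sketch below simulates two independent fair coin flips and compares the observed frequency of "both heads" with the product of the individual probabilities. It is a minimal hypothetical example (the helper name both_heads_frequency is our own), not taken from any of the excerpts on this page.

```python
import random

def both_heads_frequency(trials=100_000, seed=42):
    """Estimate P(first flip is heads and second flip is heads)
    for two independent fair coin flips by simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        first = rng.random() < 0.5    # flip 1
        second = rng.random() < 0.5   # flip 2, generated independently of flip 1
        hits += first and second
    return hits / trials

estimate = both_heads_frequency()
print(f"Simulated P(H and H):   {estimate:.3f}")
print(f"Product rule 0.5 * 0.5: {0.5 * 0.5:.3f}")
```
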

Written by Perlego with AI-assistance

11 Key excerpts on "Independent Events Probability"

  • Entropy Demystified: The Second Law Reduced to Plain Common Sense
    eBook - PDF

    For random variables, “independent” and “uncorrelated” events are different concepts. For single events, the two concepts are identical. We can calculate the following two conditional probabilities: Pr{A | B} = 1/3 > Pr{A} = 1/6 and Pr{A | C} = 0 < Pr{A} = 1/6. In the first example, the knowledge that B has occurred increases the probability of the occurrence of A. Without that knowledge, the probability of A is 1/6 (one out of six possibilities). Given the occurrence of B, the probability of A becomes larger, 1/3 (one out of three possibilities). But given that C has occurred, the probability of A becomes zero, i.e., smaller than the probability of A without that knowledge. It is important to distinguish between disjoint (i.e., mutually exclusive) events and independent events. Disjoint events are events that are mutually exclusive; the occurrence of one excludes the occurrence of the second. Being disjoint is a property of the events themselves (i.e., the two events have no common elementary event). Independence between events is not defined in terms of the elementary events comprising the two events, but in terms of their probabilities. If the two events are disjoint, then they are strongly dependent. The following example illustrates the relationship between dependence and the extent of overlapping. Let us consider the following case. In a roulette, there are altogether 12 numbers {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}. Each of us chooses a sequence of six consecutive numbers; say, I choose the sequence A = {1, 2, 3, 4, 5, 6} and you choose the sequence B = {7, 8, 9, 10, 11, 12}. The ball is rolled around the ring. We assume that the roulette is “fair,” i.e., each outcome has the same probability.
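
To make the excerpt's point concrete, the sketch below enumerates the 12-number roulette from the passage and checks that the two disjoint blocks A and B fail the factorization test P(A ∩ B) = P(A)·P(B). It is a small illustrative script under the excerpt's equal-probability assumption, not code from the book.

```python
from fractions import Fraction

# Roulette with 12 equally likely outcomes, as in the excerpt above.
sample_space = set(range(1, 13))
A = {1, 2, 3, 4, 5, 6}
B = {7, 8, 9, 10, 11, 12}

def prob(event):
    """Classical probability: favourable outcomes over total outcomes."""
    return Fraction(len(event & sample_space), len(sample_space))

p_a, p_b, p_both = prob(A), prob(B), prob(A & B)
print(p_a, p_b, p_both)        # 1/2 1/2 0
print(p_both == p_a * p_b)     # False: disjoint events with positive probability are dependent
```
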
  • Introduction to Financial Mathematics
    Notice that, despite the fact that the English definitions of the words “independent” and “disjoint” are slightly similar, their probabilistic meanings are very different. When events are disjoint, they have no outcomes in common, so that, if one is known to have occurred, then the other cannot occur. This would change the probability of the second drastically; in fact, if we know that the first event occurred, the second event then has probability zero. So disjoint events cannot be independent in the sense we are using. The most useful property that independence has does not involve conditional probability as the intuition suggests, but instead involves the occurrence of sequences of independent events. Intuition and common sense support you well here; if two coins are flipped “independently,” then it is very natural to assume that the probability that the first is a head and the second is also a head is the product 1/2 · 1/2. We will take this factorization condition as our definition. Definition 1. Two events A and B are independent of each other if the probability that both occur factors into the product of the individual probabilities, that is: P[A ∩ B] = P[A] · P[B]. (3.53) More generally, several events A1, A2, ..., An are mutually independent if the probability of the intersection of any subcollection of the events factors into the product of the probabilities of the events in that subcollection. Notice that this gives us a new language to talk about the binomial branch process. In an n-step process, we are assuming that all events pertaining to transitions in different time steps are independent.
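
The definition above requires the factorization to hold for every subcollection, not just the full intersection. The sketch below (an illustrative check with our own helper names, not code from the textbook) verifies mutual independence for three fair coin flips by testing every subcollection of two or more events.

```python
from itertools import combinations, product
from fractions import Fraction

# Sample space: all sequences of three fair coin flips, each equally likely.
space = list(product("HT", repeat=3))

def prob(event):
    return Fraction(len(event), len(space))

# Event i: the i-th flip is heads.
events = [{w for w in space if w[i] == "H"} for i in range(3)]

def mutually_independent(events):
    """Check P(intersection of any subcollection) == product of the individual probabilities."""
    for k in range(2, len(events) + 1):
        for subset in combinations(events, k):
            lhs = prob(set.intersection(*subset))
            rhs = Fraction(1)
            for e in subset:
                rhs *= prob(e)
            if lhs != rhs:
                return False
    return True

print(mutually_independent(events))  # True
```
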
  • Mathematics NQF4 SB
    eBook - PDF
    • M Van Rensburg, I Mapaling, M Trollope(Authors)
    • 2017(Publication Date)
    • Macmillan
      (Publisher)
    Module 13: Use experiments, simulations and probability distributions to set up and explore probability models. Module 13 Overview: By the end of this module you should be able to:
    • Unit 13.1: Explain and distinguish between the following terminology/events: probability, dependent events, independent events, mutually exclusive events, mutually inclusive events, and complementary events.
    • Unit 13.2: Make predictions based on validated experimental probabilities, taking the following into account: P(S) = 1 (where S is the sample space); for disjoint (mutually exclusive) events, P(A or B) = P(A) + P(B); complementary events, therefore being able to calculate the probability of an event not occurring; P(A or B) = P(A) + P(B) − P(A and B) (where A and B are events within a sample space); correctly identify dependent and independent events and apply the product rule for independent events: P(A and B) = P(A) ∙ P(B).
    • Unit 13.3: Draw tree diagrams and Venn diagrams, complete two-way contingency tables to solve probability problems, and interpret and clearly communicate the results of experiments correctly in terms of real context.
    Unit 13.1: Probability terminology. Probability is a measure of the relative likelihood of an event taking place. Or, put differently, it tells us how likely something is to happen. The extent to which something is probable can be found either by theoretical means or by doing an experiment. When we use theory to solve a probability problem, we use logical thinking and can apply a formula to solve the problem. When we conduct an experiment, we perform activities. Examples are when you toss a coin, roll a die or a pair of dice, select a card from a deck of cards, or a ball or marble from a bag of the relevant items. Table 13.1 explains the different probability terminology. Where applicable, the example shows how the term would be represented in a Venn diagram.
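
The rules listed in Unit 13.2 can be checked on a small worked example. The sketch below is a hypothetical illustration (the events A and B on a single die roll are our own choice, not from the module): it applies the general addition law and the product rule for independent events.

```python
from fractions import Fraction

# One roll of a fair die.
sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}        # "even number"
B = {1, 2, 3, 4}     # "at most four"

def prob(event):
    return Fraction(len(event), len(sample_space))

# General addition law: P(A or B) = P(A) + P(B) - P(A and B)
p_or = prob(A) + prob(B) - prob(A & B)
print(p_or == prob(A | B))                 # True

# Product rule holds here, so A and B are independent on this sample space.
print(prob(A & B) == prob(A) * prob(B))    # True: 1/3 == 1/2 * 2/3
```
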
  • Introduction to Statistics and Data Analysis
    • Roxy Peck, Chris Olsen, Tom Short(Authors)
    • 2019(Publication Date)
    There is also a multiplication rule for more than two independent events. The independence of more than two events is an important concept in studying complex systems with many components. If these components are critical to the operation of a machine, assessing the probability of the machine’s failure is undertaken by analyzing the failure probabilities of the individual components. Multiplication Rule for k Independent Events: Events E1, E2, …, Ek are independent if knowledge that any of the events have occurred does not change the probabilities that any particular one or more of the other events has occurred. Independence implies that P(E1 ∩ E2 ∩ … ∩ Ek) = P(E1)P(E2)…P(Ek). This means that when events are independent, the probability that all occur together is the product of the individual probabilities. This relationship also holds if one or more of the events is replaced by its complement. In Example 6.19 we take a rather simplified view of a desktop computer to illustrate the use of the multiplication rule. Example 6.19 Computer Configurations: Suppose that a desktop computer system consists of a monitor, a mouse, a keyboard, the computer processor itself, and storage devices such as a disk drive. Most computer system problems due to manufacturing defects occur soon in the system’s lifetime.
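
In the spirit of the computer-configuration example, the sketch below multiplies per-component reliability figures to get the probability that every component is defect-free. The numbers and the dictionary of components are made up for illustration, not taken from the book; only the multiplication rule itself comes from the excerpt.

```python
from math import prod

# Hypothetical probabilities that each component is free of manufacturing defects.
component_reliability = {
    "monitor": 0.995,
    "mouse": 0.999,
    "keyboard": 0.998,
    "processor": 0.997,
    "disk drive": 0.990,
}

# If defects occur independently across components, the multiplication rule gives
# P(all components work) = product of the individual probabilities.
p_all_work = prod(component_reliability.values())
print(f"P(no defective component) = {p_all_work:.4f}")
print(f"P(at least one defect)    = {1 - p_all_work:.4f}")
```
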
  • Elements of Probability Theory
    • L. Z. Rumshiskii(Author)
    • 2016(Publication Date)
    • Pergamon
      (Publisher)
    For example, in the throwing of an unbiased die the approximate equality of the relative frequencies with which the six faces appear is explained by its symmetry, giving the same possibility of occurrence to each number from 1 to 6. Thus we assign to an event a number called the probability of the event. This measures the degree of possibility of occurrence of the event, in the sense that the relative frequencies of this event obtained from repetitions of the experiment are grouped near this number. As with the relative frequency of an event, its probability must be dimensionless, a constant lying between 0 and 1. However, we emphasize the difference that, whereas the relative frequency still depends on the carrying out of trials, the probability of an event is connected only with the event itself (as a possible outcome of a given experiment). Thus probability is the first basic idea, and in general it is impossible to define it more simply. As we shall show in the following section, we can calculate the probabilities directly only in certain very simple schemes; the analysis of these simple experiments will allow us to establish the basic properties of probability which we shall need for the further development of the theory. §2. The Classical Definition of Probability. Let us first of all agree on some notation. Events are called mutually exclusive if they cannot occur simultaneously. A collection of events form a partition if at each trial one and only one of the events must occur; i.e. if the events are pair-wise mutually exclusive and if only one of them occurs.
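
The excerpt describes probability as the number around which relative frequencies cluster. The sketch below (a simple simulation with our own helper name face_frequencies, not from the book) rolls a fair die many times and compares each face's relative frequency with 1/6.

```python
from collections import Counter
import random

def face_frequencies(rolls=60_000, seed=0):
    """Relative frequency of each face after many rolls of a fair die."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(rolls))
    return {face: counts[face] / rolls for face in range(1, 7)}

for face, freq in face_frequencies().items():
    print(f"face {face}: relative frequency {freq:.4f} (probability 1/6 ≈ {1/6:.4f})")
```
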
  • Essentials of Statistics for Business & Economics
    • David Anderson, Dennis Sweeney, Thomas Williams, Jeffrey Camm(Authors)
    • 2019(Publication Date)
    Note that the multiplication law for independent events provides another way to determine whether A and B are independent. That is, if P(A ∩ B) = P(A)P(B), then A and B are independent; if P(A ∩ B) ≠ P(A)P(B), then A and B are dependent. As an application of the multiplication law for independent events, consider the situation of a service station manager who knows from past experience that 80% of the customers use a credit card when they purchase gasoline. What is the probability that the next two customers purchasing gasoline will each use a credit card? If we let A = the event that the first customer uses a credit card and B = the event that the second customer uses a credit card, then the event of interest is A ∩ B. Given no other information, we can reasonably assume that A and B are independent events. Thus, P(A ∩ B) = P(A)P(B) = (.80)(.80) = .64. To summarize this section, we note that our interest in conditional probability is motivated by the fact that events are often related. In such cases, we say the events are dependent and the conditional probability formulas in equations (4.7) and (4.8) must be used to compute the event probabilities. If two events are not related, they are independent; in this case neither event’s probability is affected by whether the other event occurred. Do not confuse the notion of mutually exclusive events with that of independent events. Two events with nonzero probabilities cannot be both mutually exclusive and independent. If one mutually exclusive event is known to occur, the other cannot occur; thus, the probability of the other event occurring is reduced to zero. They are therefore dependent.
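
The service-station calculation above extends directly to more than two customers. The sketch below (an illustrative helper with a name of our own, not from the textbook) computes the probability that the next k customers all pay by credit card under the same independence assumption.

```python
def prob_all_use_card(p_card=0.80, k=2):
    """Probability that the next k customers all use a credit card,
    assuming each customer's choice is independent of the others."""
    return p_card ** k

print(prob_all_use_card(k=2))  # 0.64, matching the excerpt's (.80)(.80)
print(prob_all_use_card(k=5))  # ≈ 0.328 for five consecutive customers
```
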
  • Applied Medical Statistics
    • Jingmei Jiang(Author)
    • 2022(Publication Date)
    • Wiley
      (Publisher)
    Therefore, in this chapter, we introduce probability concepts and useful notation that are most pertinent to biomedicine and biostatistical analysis. Chapter 3, Fundamentals of Probability. Contents: 3.1 Sample Space and Random Events (3.1.1 Definitions of Sample Space and Random Events; 3.1.2 Operation of Events); 3.2 Relative Frequency and Probability (3.2.1 Definition of Probability; 3.2.2 Basic Properties of Probability); 3.3 Conditional Probability and Independence of Events (3.3.1 Conditional Probability; 3.3.2 Independence of Events); 3.4 Multiplication Law of Probability; 3.5 Addition Law of Probability (3.5.1 General Addition Law; 3.5.2 Addition Law of Mutually Exclusive Events); 3.6 Total Probability Formula and Bayes’ Rule (3.6.1 Total Probability Formula; 3.6.2 Bayes’ Rule); 3.7 Summary; 3.8 Exercises. 3.1 Sample Space and Random Events: In nature, people often encounter two types of phenomena. One is the deterministic phenomenon, which is characterized by conditions under which the results are completely predictable, that is, the same result is observed each time the experiment is conducted. For example, heavy objects thrown into the sky inevitably fall to the ground because of the earth’s gravity, and water at 100°C under standard atmospheric pressure inevitably boils. The other is the random phenomenon, which is characterized by conditions under which the results are not predictable, that is, one of several possible outcomes is observed each time the experiment is conducted, for example, the outcome (heads or tails) of flipping a coin and the number of calls received by an emergency center in an hour. However, the actual appearance of the predicted result is accidental in a random phenomenon, such as predicting heads when we flip a coin. These occasional phenomena demonstrate a certain regularity after many repeated experiments and observations, which is regarded as a statistical law.
  • Probability and Random Variables
    3 Conditional Probability and Independence. 3.1 INTRODUCTION. We have now considered several methods which will help us to count the points in various kinds of events when the sample space consists of a finite number of points. The impression may have been given that 'equally likely outcomes' are somehow always self-evident. Difficulties arising involve either hidden or unwarranted assumptions about the population being sampled. In a much-quoted example, it is held that if a fair coin is tossed twice, the probability that heads will appear at least once is 2/3. The basis of the argument is that there are three cases to consider - heads on the first throw, heads on the second throw, and heads on neither; two of these are favourable, hence the probability is 2/3. The objection is that these cases are neither equally likely nor mutually exclusive, since heads on the first throw does not preclude heads on the second throw. E. Parzen [1] discusses in detail calculating the probability that for a randomly chosen month the thirteenth is a Friday. At first sight, every day of the week seems equally likely, and hence the probability is 1/7. However, a count of all cases over a stipulated long period of time gives a different result. This question of the actual relative frequency of an event is important if useful predictions are to be made in real situations. An unborn child may be either a boy or a girl. The probability that it will be a boy is not 1/2, but nearer 0.52 if the actual proportion of boys over several decades is used as an estimate. 3.2 EVALUATING PROBABILITIES. Example 1. Two cards are drawn without replacement from a well-shuffled pack. Calculate the probability that at least one card is a club.
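
The coin-tossing fallacy quoted in the excerpt is easy to settle by listing the equally likely outcomes. The sketch below (an illustrative enumeration, not from the book) lists the four outcomes of two fair tosses and shows that the probability of at least one head is 3/4, not 2/3.

```python
from fractions import Fraction
from itertools import product

# The four equally likely outcomes of tossing a fair coin twice.
outcomes = list(product("HT", repeat=2))   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

at_least_one_head = [w for w in outcomes if "H" in w]
p = Fraction(len(at_least_one_head), len(outcomes))
print(p)   # 3/4: the three "cases" in the fallacious argument are not equally likely
```
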
  • Probability Theory: A First Course in Probability Theory and Statistics
    eBook - PDF

    • Werner Linde(Author)
    • 2016(Publication Date)
    • De Gruyter
      (Publisher)
    It is intuitively clear that these two events occur independently of each other. But how to express this mathematically? To answer this question, think about the probability of A under the condition B. The fact whether or not B occurred has no influence on the occurrence of A. For the occurrence or nonoccurrence of A, it is completely insignificant what happened in the first roll. Mathematically this means that P(A|B) = P(A). Let us check whether this is true in this concrete case. Indeed, it holds P(A) = 1/3 as well as P(A|B) = P(A ∩ B)/P(B) = (6/36)/(1/2) = 1/3. The previous example suggests that independence of A of B could be described by P(A) = P(A|B) = P(A ∩ B)/P(B). (2.12) But formula (2.12) has a disadvantage, namely we have to assume P(B) > 0 to ensure that P(A|B) exists. To overcome this problem, rewrite eq. (2.12) as P(A ∩ B) = P(A) P(B). (2.13) In this form, we may take eq. (2.13) as a basis for the definition of independence. Definition 2.2.2. Let (K, A, P) be a probability space. Two events A and B in A are said to be (stochastically) independent provided that P(A ∩ B) = P(A) ⋅ P(B). (2.14) In the case that eq. (2.14) does not hold, the events A and B are called (stochastically) dependent. Remark 2.2.3. In the sequel, we use the notations “independent” and “dependent” without adding the word “stochastically.” Since we will not use other versions of independence, there should be no confusion. Example 2.2.4. A fair die is rolled twice. Event A occurs if the first roll is either “1” or “2” while B occurs if the sum of both rolls equals 7. Are A and B independent? Answer: It holds P(A) = 1/3, P(B) = 1/6 as well as P(A ∩ B) = 2/36 = 1/18. Hence, we get P(A ∩ B) = P(A) ⋅ P(B) and A and B are independent. Question: Are A and B also independent if A is as before and B is defined as a set of pairs with sum 4? Example 2.2.5. In an urn, there are n, n ≥ 2, white balls and also n black balls.
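
Example 2.2.4 and the follow-up question can both be checked by enumerating the 36 equally likely pairs of rolls. The sketch below (an illustrative check, not code from the book) confirms independence when B is "sum equals 7" and dependence when B is "sum equals 4".

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))   # 36 equally likely ordered pairs

def prob(event):
    return Fraction(len(event), len(rolls))

A = {w for w in rolls if w[0] in (1, 2)}       # first roll is 1 or 2

for target in (7, 4):
    B = {w for w in rolls if sum(w) == target}
    independent = prob(A & B) == prob(A) * prob(B)
    print(f"sum == {target}: P(A∩B) = {prob(A & B)}, "
          f"P(A)P(B) = {prob(A) * prob(B)}, independent = {independent}")
```
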
  • Probability and Statistics: A Didactic Introduction
    eBook - PDF

    • José I. Barragués, Adolfo Morais, Jenaro Guisasola(Authors)
    • 2016(Publication Date)
    • CRC Press
      (Publisher)
    11. Assume the hypothesis of the equiprobability of the 24 elementary events in E. Are A and B independent? In this case, p(A) = 8/24 = 1/3, p(A/B) = 4/12 = 1/3. Thus, the two events are independent. Note the graphic interpretation of independence: the weight (probabilistically speaking) of the event A in the sample space is identical to the weight that the part of A and B has in the subspace B. Figure 11. Sample space E and two events A and B. Theory-summary Table 4. Probabilistic independence: Two events A, B are called independent if p(A/B) = p(A), or equivalently if p(B/A) = p(B), or equivalently if p(A ∩ B) = p(A)p(B). In addition, Ā, B and A, B̄ are also independent events. The probabilistic independence of A and B means that the verification of one of the events does not alter the probability that the other event will be verified. Probabilistic dependence: The events A and B are called dependent if they are not independent, that is, if p(A/B) ≠ p(A), or equivalently if p(B/A) ≠ p(B), or equivalently if p(A ∩ B) ≠ p(A)p(B). The probabilistic dependence of A and B means that if one of the events has been verified, it modifies the probability of verifying the other event. Exercise 15. Let us consider an urn U(3b, 5n). The experiment consists of drawing a ball and afterwards drawing a second ball without returning the first ball to the urn. We repeated the experiment 1048 times, obtaining the results shown in Table 4. The aim is to estimate, calculate and interpret the probabilities of the following events: b1, n1, b2, n2, b1 and b2, b1 and n2, n1 and b2, n1 and n2, b2/b1, b2/n1, n2/b1, n2/n1, n1/n2, b1/b2, b1/n2, b1 and b2, b1 and n2, n1 and b2, n1 or n2. Exercise 16. Let two events be A and B. Prove that using the values of p(A), p(B) and p(A/B) it is possible to obtain the following values: p(Ā), p(B̄), p(A ∩ B), p(A ∪ B), p(B/A), p(A/B̄), p(B/Ā), p(B̄/A), p(Ā/B), p(B̄/Ā), p(Ā ∩ B̄), p(Ā ∪ B̄).
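
The urn exercise above involves dependent draws. The sketch below is an illustrative calculation, not the book's solution: it computes the exact probabilities for an urn with 3 balls of type b and 5 of type n drawn twice without replacement, and shows that the second draw depends on the first.

```python
from fractions import Fraction

# Urn U(3b, 5n): 3 balls of type b and 5 of type n, drawn twice without replacement.
b, n = 3, 5
total = b + n

p_b1 = Fraction(b, total)                     # first draw is b
p_b2_given_b1 = Fraction(b - 1, total - 1)    # second draw is b, given the first was b
p_b2_given_n1 = Fraction(b, total - 1)        # second draw is b, given the first was n

# Unconditional probability of b on the second draw (total probability formula).
p_b2 = p_b1 * p_b2_given_b1 + (1 - p_b1) * p_b2_given_n1
print(p_b2)                                   # 3/8, the same as the first draw
print(p_b2_given_b1 == p_b2)                  # False: the draws are dependent
```
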
  • Introduction to Probability and Statistics Metric Edition
    • William Mendenhall, Robert Beaver, Barbara Beaver(Authors)
    • 2019(Publication Date)
    Then P(A)P(D) = (.60)(.44) = .264 and P(A ∩ D) = .35. Since these two probabilities are not the same, events A and D are dependent. 2. You could also calculate P(A|D) = P(A ∩ D)/P(D) = .35/.44 ≈ .80. Since P(A|D) ≈ .80 and P(A) = .60, we again conclude that events A and D are dependent. 3. A third option is to calculate P(D|A) = P(A ∩ D)/P(A) = .35/.60 ≈ .58, while P(D) = .44. Again we see that A and D are dependent events. The Difference between Mutually Exclusive and Independent Events: Many students find it hard to tell the difference between mutually exclusive and independent events. • When two events are mutually exclusive or disjoint, they cannot both happen together when the experiment is performed. Once the event B has occurred, event A cannot occur, so that P(A|B) = 0, or vice versa. The occurrence of event B certainly affects the probability that event A can occur. • Therefore, mutually exclusive events must be dependent. • When two events are mutually exclusive or disjoint, P(A ∩ B) = 0 and P(A ∪ B) = P(A) + P(B). • When two events are independent, P(A ∩ B) = P(A)P(B), and P(A ∪ B) = P(A) + P(B) − P(A)P(B). Need to Know…? Using probability rules to calculate probabilities requires some experience and ingenuity. You need to express the event of interest as a union or intersection (or the combination of both) of two or more events whose probabilities are known or easily calculated. Often you can do this in different ways; the key is to find the right combination. Example 4.23. Two cards are drawn from a deck of 52 cards. Calculate the probability that the draw includes an ace and a ten. Solution: Consider the event of interest: A: Draw an ace and a ten.
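
Example 4.23 can be finished by splitting the event into "ace first, then ten" and "ten first, then ace." The sketch below is an illustrative completion under that decomposition, not the book's printed solution: it computes the probability with the multiplication rule for dependent draws and checks it by brute-force enumeration.

```python
from fractions import Fraction
from itertools import permutations

# Multiplication rule over the two orderings: ace then ten, or ten then ace.
p_rule = Fraction(4, 52) * Fraction(4, 51) + Fraction(4, 52) * Fraction(4, 51)
print(p_rule)   # 8/663

# Brute-force check over all ordered two-card draws from a 52-card deck.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["♠", "♥", "♦", "♣"]
deck = [(r, s) for r in ranks for s in suits]

favourable = sum(
    1
    for c1, c2 in permutations(deck, 2)
    if {c1[0], c2[0]} == {"A", "10"}        # one ace and one ten, in either order
)
print(Fraction(favourable, 52 * 51))        # 8/663, agreeing with the rule
```
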
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.