Markov Chains

From Theory to Implementation and Experimentation

eBook - ePub

About this book

A fascinating and instructive guide to Markov chains for experienced users and newcomers alike

This unique guide to Markov chains approaches the subject along the four convergent lines of mathematics, implementation, simulation, and experimentation. It introduces readers to the art of stochastic modeling, shows how to design computer implementations, and provides extensive worked examples with case studies.

Markov Chains: From Theory to Implementation and Experimentation begins with a general introduction to the history of probability theory, in which the author uses quantifiable examples to illustrate how probability theory arrived at the concepts of discrete time and the Markov model from experiments involving independent variables. An introduction to simple stochastic matrices and transition probabilities is followed by a simulation of a two-state Markov chain. The notion of steady state is explored in connection with the long-run distribution behavior of the Markov chain. Predictions based on Markov chains with more than two states are examined, followed by a discussion of absorbing Markov chains. Also covered in detail are the average time spent in a state, various chain configurations, and n-state Markov chain simulations used to verify experiments involving various diagram configurations.

• Fascinating historical notes shed light on the key ideas that led to the development of the Markov model and its variants

• Various configurations of Markov Chains and their limitations are explored at length

• Numerous examples—from basic to complex—are presented in a comparative manner using a variety of color graphics

• All algorithms presented can be analyzed in Visual Basic, JavaScript, or PHP

• Designed to be useful to professional statisticians as well as readers without extensive knowledge of probability theory

Covering both the theory underlying the Markov model and an array of Markov chain implementations within a common conceptual framework, Markov Chains: From Theory to Implementation and Experimentation is a stimulating introduction to, and a valuable reference for, those wishing to deepen their understanding of this extremely valuable statistical tool.

Paul A. Gagniuc, PhD, is Associate Professor at Polytechnic University of Bucharest, Romania. He obtained his MS and his PhD in genetics at the University of Bucharest. Dr. Gagniuc's work has been published in numerous high-profile scientific journals, ranging from the Public Library of Science to BioMed Central and Nature journals. He is the recipient of several awards for exceptional scientific results and is a highly active figure in the review process across different scientific areas.


Information

Publisher: Wiley
Year: 2017
Print ISBN: 9781119387558
eBook ISBN: 9781119387589

1
Historical Notes

1.1 Introduction

Probability is the measure of how likely a future event is. It is not by “chance” that most of the examples used to explain probability involve objects like dice, cards, or coins. Historical evidence shows that the earliest attempts at probability theory had their roots in gambling [1]. Given the implications that gambling has had over time, particularly its social consequences, great efforts have been made to avoid or to understand uncertainty. Historians have looked to Aristotle and beyond when searching for the origins of probabilistic concepts [1]. The very first ideas behind this fundamental principle may derive directly from Aristotle's Ethics, where the concept of “justice” took new forms over time [1]. Later, the medieval poem De Vetula (“On the Old Woman”) appeared around the year 1250. This long thirteenth-century elegiac comedy contains the first written references to gambling [2]. The non-poetic content of De Vetula refers to the connection between the number of combinations and the expected frequency of a given total [2]. Gerolamo Cardano (1501–1575) made the first written reference to odds defined as the ratio of favorable to unfavorable outcomes [1]. In his Liber de Ludo Aleae (“Book on Games of Chance”), written around 1564 but published only posthumously, Cardano was among the first to treat probabilities in games of chance [1]. A few decades later, uncertain events related to gambling led to the well-known mathematical theory of probability formulated by Pierre de Fermat and Blaise Pascal (1654) [3]. Just three years later, in 1657, Christiaan Huygens (1629–1695) published a book devoted to probability problems arising in games of chance, entitled De Ratiociniis in Ludo Aleae (“On Reasoning in Games of Chance”) [4, 5].
A milestone contribution of Jakob Bernoulli (1654–1705) to probability theory was published posthumously in 1713 under the title Ars Conjectandi (“The Art of Conjecturing”) [6, 7]. Bernoulli was concerned with predicting the probability of unknown events [7]. In this work Bernoulli describes what is known today as the weak law of large numbers [7]. This law states that the average of the results obtained from an increasing number of trials converges toward the expected value [7]. For instance, consider the flipping of an unbiased coin: as the number of flips goes to infinity, the proportion of heads approaches 1/2. Let us consider another example: without our knowledge, x black balls and y white balls are placed in a jar (Figure 1.1a). To determine the proportions of black and white balls in the jar by experiment, a series of random draws must be performed, and each time a black or a white ball is drawn the observation is noted. As the number of draws increases, the observed ratio of white to black balls converges toward the true ratio in the jar; this is precisely what Bernoulli proved. Almost 100 years after Bernoulli, Pierre-Simon Laplace (1749–1827) broke the monopoly that gambling had held over probability theory [1–7]. In 1812, Laplace published the Théorie Analytique des Probabilités (“Analytical Theory of Probability”), in which he introduced probability to the sciences in general [8, 9].
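Bernoulli's jar experiment is easy to reproduce numerically. The sketch below, in JavaScript (the language of the book's Appendix B), simulates repeated draws with replacement from a jar whose true proportion of white balls is an illustrative assumption (0.7 here, not a value from the book), and shows the observed proportion converging toward the true one as the number of draws grows.

```javascript
// Weak law of large numbers: estimate the proportion of white balls
// in a jar by repeated random draws with replacement.
// The true proportion (0.7) is an illustrative assumption.
function simulateDraws(trueWhiteProportion, numDraws) {
  let whiteCount = 0;
  for (let i = 0; i < numDraws; i++) {
    // Each draw is independent: a uniform random number below the
    // true proportion counts as drawing a white ball.
    if (Math.random() < trueWhiteProportion) whiteCount++;
  }
  return whiteCount / numDraws; // observed proportion of white balls
}

// The estimate tightens as the number of draws increases.
for (const n of [10, 100, 10000, 1000000]) {
  console.log(n, simulateDraws(0.7, n).toFixed(4));
}
```

Running this a few times makes the law tangible: the estimates for 10 draws scatter widely, while those for a million draws cluster tightly around 0.7.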

Figure 1.1 From the Bernoulli model to the Markov model. (a) The Bernoulli model with a single jar filled with different proportions of black and white balls. The curved arrows show from which jar the extraction was made and into which jar the next draw will be made. Black curved arrows show the extraction and observation of black balls, whereas white curved arrows show the extraction and observation of white balls. (b) Two Bernoulli jars, one white and one black, each filled with different proportions of black and white balls. Here, draws are still independent of one another. (c–f) How dependence arises between the two jars when the color of the curved arrows is “attracted” to the color of the jars. (g–j) How the two-jar system is transformed into a Markov diagram by changing the viewing angle of the jars from the side view to the top view.

1.2 On the Wings of Dependent Variables

Another milestone in probability theory was reached almost 100 years later by Andrei Markov (1856–1922) [10, 11]. In the Bernoulli model, the outcome of previous events does not change the outcome of current or future events. Today it is obvious that in many cases events are not independent; in the past, however, this was not so obvious (in the mathematical sense). The term dependent events, or dependent variables, refers to situations in which the probability of an event is conditioned by events that took place in the past. Markov's contemporary Pavel Nekrasov assumed that independence is a necessary condition for the law of large numbers [12]. Following a dispute with Nekrasov, Markov argued that the law of large numbers can also hold for dependent variables. In 1906, Markov extended Bernoulli's results to dependent variables and began developing his reasoning about chains of linked probabilities (Figures 1.1a–j) [10]. Markov's connection with Bernoulli is indirect but deeply rooted in the history of the Academy of Sciences in St. Petersburg: prior to Markov's time, the Academy's members included none other than the great Leonhard Euler (1707–1783) and Jakob Bernoulli's nephews, Nicholas Bernoulli (1687–1759) and Daniel Bernoulli (1700–1782) [12]. In 1907, Markov proved that the independence of random variables is not a required condition for the validity of the weak law of large numbers and the central limit theorem [10, 11]. For his demonstration, he envisioned a virtual machine (Figures 1.1f and 1.1j).
Let us consider two jars that represent the two states of a machine (Figure 1.1b). One is painted black (state 1) and the other white (state 0). Both contain certain proportions of white and black balls. First, an...
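The two-jar machine can be sketched as a two-state Markov chain: the color of the ball drawn from the current jar selects the jar for the next draw, so successive draws are no longer independent. The transition probabilities below are illustrative assumptions, not the book's values; each row of the matrix corresponds to one jar's ball proportions.

```javascript
// A two-state Markov chain modeling the two-jar machine.
// State 0 = white jar, state 1 = black jar. Each row gives the
// probability of drawing a white or a black ball from that jar;
// the drawn color decides which jar the next draw comes from.
// These probabilities are illustrative, not taken from the book.
const P = [
  [0.8, 0.2], // from white jar: P(white) = 0.8, P(black) = 0.2
  [0.3, 0.7], // from black jar: P(white) = 0.3, P(black) = 0.7
];

function simulateChain(P, startState, steps) {
  const visits = [0, 0];
  let state = startState;
  for (let i = 0; i < steps; i++) {
    visits[state]++;
    // Draw a ball; its color selects the next jar (state).
    state = Math.random() < P[state][0] ? 0 : 1;
  }
  return visits.map(v => v / steps); // long-run occupancy of each state
}

console.log(simulateChain(P, 0, 1000000));
```

With these assumed probabilities, the long-run occupancies settle near the stationary distribution (0.6, 0.4), which can be verified by solving the balance condition: the white-jar share satisfies x = 0.8x + 0.3(1 − x), giving x = 0.6. This is the kind of steady-state behavior explored later in the book.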

Table of contents

  1. Cover
  2. Title page
  3. Copyright
  4. Dedication
  5. Abstract
  6. Preface
  7. Acknowledgments
  8. About the Companion Website
  9. 1 Historical Notes
  10. 2 From Observation to Simulation
  11. 3 Building the Stochastic Matrix
  12. 4 Predictions Using Two-State Markov Chains
  13. 5 Predictions Using n-State Markov Chains
  14. 6 Absorbing Markov Chains
  15. 7 The Average Time Spent in Each State
  16. 8 Discussions on Different Configurations of Chains
  17. 9 The Simulation of an n-State Markov Chain
  18. A Supporting Algorithms in PHP
  19. B Supporting Algorithms in JavaScript
  20. C Syntax Equivalence between Languages
  21. Glossary
  22. References
  23. Index
  24. EULA