Monte Carlo Methods
Monte Carlo methods are a class of computational techniques that solve problems through random sampling. They are particularly useful for estimating complex mathematical quantities or simulating systems with many variables. By generating a large number of random samples, Monte Carlo methods provide approximate solutions to problems that are difficult or impossible to solve with traditional deterministic algorithms.
Written by Perlego with AI-assistance
12 Key excerpts on "Monte Carlo Methods"
Data Science and Machine Learning
Mathematical and Statistical Methods
- Dirk P. Kroese, Zdravko Botev, Thomas Taimre, Radislav Vaisman (Authors)
- 2019 (Publication Date)
- Chapman and Hall/CRC (Publisher)
CHAPTER 3: MONTE CARLO METHODS
Many algorithms in machine learning and data science make use of Monte Carlo techniques. This chapter gives an introduction to the three main uses of Monte Carlo simulation: to (1) simulate random objects and processes in order to observe their behavior, (2) estimate numerical quantities by repeated sampling, and (3) solve complicated optimization problems through randomized algorithms.
3.1 Introduction
Briefly put, Monte Carlo simulation is the generation of random data by means of a computer. These data could arise from simple models, such as those described in Chapter 2, or from very complicated models describing real-life systems, such as the positions of vehicles on a complex road network, or the evolution of security prices in the stock market. In many cases, Monte Carlo simulation simply involves random sampling from certain probability distributions. The idea is to repeat the random experiment that is described by the model many times to obtain a large quantity of data that can be used to answer questions about the model. The three main uses of Monte Carlo simulation are:
- Sampling. Here the objective is to gather information about a random object by observing many realizations of it. For instance, this could be a random process that mimics the behavior of some real-life system such as a production line or telecommunications network. Another usage is found in Bayesian statistics, where Markov chains are often used to sample from a posterior distribution.
- Estimation. In this case the emphasis is on estimating certain numerical quantities related to a simulation model. An example is the evaluation of multidimensional integrals via Monte Carlo techniques. This is achieved by writing the integral as the expectation of a random variable, which is then approximated by the sample mean. By the Law of Large Numbers, this approximation converges to the true value as the sample size grows (a minimal code sketch of this idea follows below).
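The following is a minimal sketch of the estimation idea described in the excerpt above: a multidimensional integral over the unit hypercube is written as an expectation and approximated by a sample mean. The integrand, dimension, and sample size are illustrative assumptions, not taken from the book.

```python
# Minimal sketch: estimate a d-dimensional integral over [0,1]^d by writing it
# as an expectation E[f(U)] with U ~ Uniform([0,1]^d) and taking a sample mean.
# The integrand f and the dimension are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)

def f(x):
    # Example integrand (assumed for illustration): exp(-sum(x^2)) on [0,1]^d
    return np.exp(-np.sum(x**2, axis=-1))

d = 5          # dimension of the integral
n = 100_000    # number of Monte Carlo samples

u = rng.random((n, d))          # n i.i.d. points, uniform on the unit hypercube
values = f(u)                   # f evaluated at each sample
estimate = values.mean()        # sample mean approximates the integral
std_error = values.std(ddof=1) / np.sqrt(n)  # accuracy estimate from the CLT

print(f"integral ~ {estimate:.5f} +/- {std_error:.5f}")
```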
- Philip Dutre, Philippe Bekaert, Kavita Bala (Authors)
- 2018 (Publication Date)
- A K Peters/CRC Press (Publisher)
3 Monte Carlo Methods
This chapter introduces the concept of Monte Carlo integration and reviews some basic concepts in probability theory. We also present techniques to create better distributions of samples. More details on Monte Carlo Methods can be found in Kalos and Whitlock [86], Hammersley and Handscomb [62], and Spanier and Gelbard [183]. References on quasi–Monte Carlo Methods include Niederreiter [132].
3.1 Brief History
The term “Monte Carlo” was coined in the 1940s, at the advent of electronic computing, to describe mathematical techniques that use statistical sampling to simulate phenomena or evaluate values of functions. These techniques were originally devised to simulate neutron transport by scientists such as Stanislaw Ulam, John von Neumann, and Nicholas Metropolis, among others, who were working on the development of nuclear weapons. However, early examples of computations that can be defined as Monte Carlo exist, though without the use of computers to draw samples. One of the earliest documented examples of a Monte Carlo computation was done by Comte de Buffon in 1777. He conducted an experiment in which a needle of length L was thrown at random on a horizontal plane with lines drawn at a distance d apart (d > L). He repeated the experiment many times to estimate the probability P that the needle would intersect one of these lines. He also analytically evaluated P as P = 2L/(πd). Laplace later suggested that this technique of repeated experimentation could be used to compute an estimated value of π. Kalos and Whitlock [86] present early examples of Monte Carlo Methods.
3.2 Why Are Monte Carlo Techniques Useful?
Consider a problem that must be solved, for example, computing the value of the integration of a function with respect to an appropriately defined measure over a domain. The Monte Carlo approach to solving this problem would be to define a random variable such that the expected value of that random variable would be the solution to the problem. Samples of this random variable are then drawn and averaged to compute an estimate of the expected value of the random variable. This estimated expected value is an approximation to the solution of the problem we originally wanted to solve.
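As a rough illustration of the Buffon's needle experiment described above, the following sketch simulates repeated needle drops and inverts P = 2L/(πd) to estimate π; the needle length, line spacing, and number of drops are illustrative assumptions.

```python
# Minimal sketch of Buffon's needle experiment described above: drop a needle of
# length L on a plane ruled with parallel lines a distance d apart (d > L) and
# use the crossing frequency, P ~ 2L/(pi*d), to estimate pi. Parameter values
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)

L, d = 1.0, 2.0       # needle length and line spacing (d > L)
n = 1_000_000         # number of needle drops

# Position of the needle's center relative to the nearest line, and its angle.
y = rng.uniform(0.0, d / 2, size=n)            # distance of center to nearest line
theta = rng.uniform(0.0, np.pi / 2, size=n)    # acute angle with the lines

hits = np.count_nonzero(y <= (L / 2) * np.sin(theta))
p_hat = hits / n                 # estimated crossing probability
pi_hat = 2 * L / (p_hat * d)     # invert P = 2L/(pi*d)

print(f"estimated P  = {p_hat:.5f}")
print(f"estimated pi = {pi_hat:.5f}")
```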
- Brian L Hammond, William A Lester, P J Reynolds (Authors)
- 1994 (Publication Date)
- World Scientific (Publisher)
Chapter 1: Introduction to Monte Carlo Methods
Monte Carlo Methods are a class of techniques that can be used to simulate the behavior of a physical or mathematical system. They are distinguished from other simulation methods, such as molecular dynamics, by being stochastic, that is, non-deterministic in some manner. This stochastic behavior in Monte Carlo Methods generally results from the use of random number sequences. Although it might not be surprising that such an analysis can be used to model random processes, Monte Carlo Methods are capable of much more. A classic use is for the evaluation of definite integrals, particularly multidimensional integrals with complicated boundary conditions. The use to which we will apply Monte Carlo is the solution of the well-known partial differential equation, the Schrödinger equation. Monte Carlo Methods are frequently applied in the study of systems with a large number of strongly coupled degrees of freedom. Examples include liquids, disordered materials, and strongly coupled solids. Unlike ideal gases or perfectly ordered crystals, these systems do not simplify readily. The many degrees of freedom present are not separable, making a simulation method, such as molecular dynamics or Monte Carlo, a wise choice. Furthermore, use of Monte Carlo is advantageous for evaluating high-dimensional integrals, where grid methods become inefficient due to the rapid increase of the number of grid points with dimensionality. Monte Carlo also can be used to simulate many classes of equations that are difficult to solve by standard analytical and numerical methods. In this chapter we introduce various aspects of statistics and simulation germane to the Monte Carlo solution of the Schrödinger equation. We begin with a discussion of random and pseudorandom numbers in Sec.
- Darren Walker (Author)
- 2022 (Publication Date)
- Mercury Learning and Information (Publisher)
8 Monte Carlo Methods
Monte Carlo Methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain a numerical result. They are often used in physical and mathematical problems when it is impossible to obtain an analytical solution, and the application of a direct algorithm is infeasible. Monte Carlo Methods are mainly used in three distinct problems: numerical integration, simulation, and optimization. The first two in this list and how they relate to physics problems are discussed in this chapter.
8.1 MONTE CARLO INTEGRATION
8.1.1 Dart Throwing
“Hit and miss” integration, also known as the shooting method, is arguably the most intuitive type of Monte Carlo method to understand. To demonstrate the application of this approach, let us discuss a novel way of approximating the value of π (see Figure 8.1). It shows the upper right quadrant of a circle of unit radius circumscribed by a unit square. Imagine throwing darts randomly at this board (some of you may have had a similar experience already in the student’s union bar). Of the total number of darts that hit within the square, the fraction of those that land within the circle will be approximately equal to the ratio of the area of the quarter circle to the area of the square. Mathematically, we write
N_circle / N_total ≈ A_circle / A_square, (8.1)
where N_circle is the number of darts landing inside the quarter circle and N_total is the total number of darts thrown. Here we have the constraint that darts cannot be thrown outside of the square, and A_square is the area contained in the unit square.
FIGURE 8.1: The Monte Carlo “dart board” used to approximate π.
Remembering your geometry basics, we can substitute and rearrange the equation above to give an approximation formula for π, such that
π ≈ 4 N_circle / N_total. (8.2)
In other words, the probability that a dart will hit the shaded area is equivalent to one-quarter of the value of π. Despite the fun you can have in trying to make the dart-throwing random, attempting to physically perform this experiment soon becomes tedious, as you need a large number of thrown darts to get a reasonably accurate approximation for π.
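A minimal sketch of the dart-throwing (“hit and miss”) estimate described above, using equation (8.2); the number of simulated darts is an illustrative assumption.

```python
# Minimal sketch of the "hit and miss" dart-throwing estimate of pi described
# above: throw random points into the unit square and count how many land
# inside the quarter circle of unit radius. The sample size is an assumption.
import random

random.seed(2)
n = 1_000_000
hits = 0
for _ in range(n):
    x, y = random.random(), random.random()   # a random "dart" in the unit square
    if x * x + y * y <= 1.0:                  # inside the quarter circle?
        hits += 1

pi_estimate = 4.0 * hits / n                  # equation (8.2): pi ~ 4 * N_circle / N_total
print(f"pi ~ {pi_estimate:.5f}")
```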
Handbook in Monte Carlo Simulation
Applications in Financial Engineering, Risk Management, and Economics
- Paolo Brandimarte (Author)
- 2014 (Publication Date)
- Wiley (Publisher)
In this introductory chapter, we consider first the historical roots of Monte Carlo; we will see in Section 1.1 that some early Monte Carlo Methods were actually aimed at solving deterministic problems. Then, in Section 1.2 we compare Monte Carlo sampling and Monte Carlo simulation, showing their deep relationship. Typical simulations deal with dynamic systems evolving in time, and there are three essential kinds of dynamic models:
- Continuous-time models
- Discrete-time models
- Discrete-event models
These model classes are introduced in Section 1.3, where we also illustrate how their nature affects the mechanics of Monte Carlo simulation. In this book, a rather relevant role is played by applications involving optimization. This may sound odd to readers who associate simulation with performance evaluation; on the contrary, there is a multiway interaction between optimization and Monte Carlo Methods, which is outlined in Section 1.4. In this book we illustrate a rather wide range of applications, which may suggest the idea that Monte Carlo Methods are almost a panacea. Unfortunately, this power may hide many pitfalls and dangers. In Section 1.5 we aim at making the reader aware of some of these traps. Finally, in Section 1.6 we list a few software tools that are commonly used to implement Monte Carlo simulations, justifying the choice of R as the language of this book, and in Section 1.7 we list prerequisites and references for readers who may need a refresher on some background material.
1.1 Historical origin of Monte Carlo simulation
Monte Carlo Methods involve random sampling, but the actual aim is to estimate a deterministic quantity. Indeed, a well-known and early use of Monte Carlo-like methods is Buffon's needle approach to estimate π.
FIGURE 1.1: An illustration of Buffon's needle.
- Ralf Korn, Elke Korn, Gerald Kroisandt (Authors)
- 2010 (Publication Date)
- CRC Press (Publisher)
Chapter 3 The Monte Carlo Method: Basic Principles
3.1 Introduction
The main idea of the Monte Carlo method is to approximate an expected value E(X) by an arithmetic average of the results of a large number of independent experiments which all have the same distribution as X. The basis of this method is one of the most celebrated results of probability theory, the strong law of large numbers. As expected values play a central role in various areas of applications of probabilistic modelling, the Monte Carlo method has a widespread use. Examples of such areas of application are the analysis and design of queueing systems (such as in supermarkets or in large factories), the design of evacuation schemes for buildings, the analysis of the reliability of technical systems, the design of telecommunication networks, and the estimation of risks of investments or of insurance portfolios, just to name a few. Historically, the Monte Carlo method dates back to 1949 when the article “The Monte Carlo Method” by Metropolis and Ulam appeared in the Journal of the American Statistical Association. However, it was already developed during World War II. J. von Neumann and S. Ulam are commonly regarded as the founders of the Monte Carlo method. The name Monte Carlo method should indicate that one uses a sort of gambling to obtain an approximation procedure. Nowadays one performs no physical gambling in the Monte Carlo method. The outcomes of the independent experiments, needed to perform the method, are replaced by suitable random numbers that are generated by a computer. As the amount of random numbers has to be very high to ensure that the Monte Carlo estimate is close to the exact expected value, the method tends to be quite slow when applied in its crude form. As the Monte Carlo estimator is a random variable, each run of it typically produces new values. As the estimator is unbiased, the variance of the estimator is a measure for its accuracy.
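A minimal sketch of the principle described above: an expected value E(X) is approximated by the arithmetic average of many independent realizations, with the sample standard error reported as the accuracy measure. The particular random variable used is an illustrative assumption, not from the excerpt.

```python
# Minimal sketch of the crude Monte Carlo principle described above: approximate
# an expected value E(X) by the arithmetic average of many independent draws of X.
# As a toy example (an assumption, not from the excerpt), X = exp(U) with
# U ~ Uniform(0, 1), whose exact expectation is e - 1.
import numpy as np

rng = np.random.default_rng(seed=3)

n = 1_000_000
x = np.exp(rng.random(n))        # n independent realizations of X

mc_estimate = x.mean()           # crude Monte Carlo estimator of E(X)
std_error = x.std(ddof=1) / np.sqrt(n)   # the estimator's spread measures its accuracy

print(f"Monte Carlo estimate: {mc_estimate:.6f} +/- {std_error:.6f}")
print(f"Exact value:          {np.e - 1:.6f}")
```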
- Frederic Magoules, Jie Pan, Kiat-An Tan, Abhinit Kumar (Authors)
- 2009 (Publication Date)
- CRC Press (Publisher)
Chapter 7 Monte Carlo Method
7.1 Introduction
The Monte Carlo method is widely used in many areas of scientific research. From computational physics to fluid dynamics, this method has seen exponential growth with the advent of computationally powerful computers. Indeed, the Monte Carlo method is of great interest for solving systems with unknown analytical solutions. In the real world, more often than not, straightforward analytical solutions are not readily available. Hence, empirical modeling and numerical simulations are much sought to better understand the physical problems involved. While in the past such modeling and numerical simulations were not very accurate, ongoing research and advances in computational power have led to more and more sophisticated and high quality models to better approximate the physical problems. Although computational power has grown exponentially over the years, it has not been able to keep up with the ever increasing demands of the improved models developed by researchers. Hence, the advent of grid systems provides the industry with a powerful tool to tap the resources offered by parallel computer networks. Such networks have theoretically a limitless amount of computational power. So far, there is a great tendency for the industry to adopt grid solutions.
7.2 Fundamentals of the Monte Carlo Method
The Monte Carlo method first saw its application in the computation of the number π. To derive the numerical value of π, one possibility is to calculate numerically the area of a circle, A_circle, of radius r, and then to deduce the value of π using the relation A_circle = πr². To calculate the area of the circle, we start by filling up a square of width 2r, which contains the circle, with N points distributed randomly. At each point, we take note of its position with respect to the circle. If the point falls within the circle, we group it under the set C.
- Melville Jr. Clark (Author)
- 2012 (Publication Date)
- Academic Press (Publisher)
VI THE MONTE CARLO METHOD
6.1 Introduction
The Monte Carlo method is a statistical method for solving deterministic or statistical problems. Statistical estimates are found for quantities of interest. The estimate is obtained by the repetitive playing of a game. The game played is an analog of the physical problem of interest. The game is specified by a set of deterministic rules related to, and sets of probabilities governing, the occurrences of physical phenomena of interest. A very simple example will illustrate the nature of the Monte Carlo method. Consider the problem of determining whether or not a conventional die is unbiased, i.e., whether or not it is loaded in favor of one or more faces. A physical determination of any bias could be made by measurements of various types: for instance, measuring the sides to see if the die is a true cube, locating the center of mass, and measuring the principal moments of inertia. The measurements lead to a deterministic answer regarding the bias or lack of bias. An alternative procedure is empirical in nature. Suppose the die is rolled many times and the occurrence of various faces in the upright position recorded. A statistical determination of the probability of obtaining any particular face will permit an analysis of any bias in the die. This second procedure could be termed a Monte Carlo study. The above example is trivial in theory but serves to illustrate the conceptual simplicity of the Monte Carlo method. There are several points to consider in the example which are universal to all Monte Carlo studies. First, although the problem has a deterministic solution, a statistical procedure was adopted which consisted of the repetitive playing of a game. The game was so constructed that the desired result could be found. In the example, the game to be played is straightforward. In more complicated problems the analog to be constructed is more complex.
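A minimal sketch of the die-rolling Monte Carlo study described above; the number of rolls, and the fact that the simulated die is fair, are illustrative assumptions.

```python
# Minimal sketch of the empirical die study described above: roll a die many
# times, record how often each face comes up, and compare the observed
# frequencies with the 1/6 expected for an unbiased die. The number of rolls
# (and the simulated die being fair) are illustrative assumptions.
import random
from collections import Counter

random.seed(4)
n_rolls = 60_000
counts = Counter(random.randint(1, 6) for _ in range(n_rolls))

for face in range(1, 7):
    freq = counts[face] / n_rolls
    print(f"face {face}: observed frequency {freq:.4f} (expected 0.1667)")
```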
- Reuven Y. Rubinstein (Author)
- 2009 (Publication Date)
- Wiley-Interscience (Publisher)
Chapter 3 deals with sampling from various distributions. The Monte Carlo method is now the most powerful and commonly used technique for analyzing complex problems. Applications can be found in many fields, from radiation transport to river basin modeling. Recently, the range of applications has been broadening, and the complexity and computational effort required have been increasing, because realism is associated with more complex and extensive problem descriptions. Finally, we mention some differences between the Monte Carlo method and simulation:
1. In the Monte Carlo method time does not play as substantial a role as it does in stochastic simulation.
2. The observations in the Monte Carlo method, as a rule, are independent. In simulation, however, we experiment with the model over time so, as a rule, the observations are serially correlated.
3. In the Monte Carlo method it is possible to express the response as a rather simple function of the stochastic input variates. In simulation the response is usually a very complicated one and can be expressed explicitly only by the computer program itself.
1.4 A MACHINE SHOP EXAMPLE
This example is quoted from Gordon [11, pp. 570-573]. For better understanding of the example an important distinction to be made is whether an entity is permanent or temporary. Permanent entities can be compactly and efficiently represented in tables, while temporary entities will be volatile records and are usually handled by the list processing technique described later. Consider a simple machine shop (or a single stage in the manufacturing process of a more complex machine shop). The shop is to machine five types of parts. The parts arrive at random intervals and are distributed randomly among the different types. There are three machines, all equally able to machine any part. If a machine is available at the time a part arrives, machining begins immediately.
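The excerpt breaks off before giving the details of the machine shop model, so the following is only a rough event-driven sketch under assumed exponential interarrival and machining times; part types are omitted since every machine can handle any part and no type-specific data appear in the excerpt.

```python
# Rough event-driven sketch of the machine shop described above: parts arrive
# at random intervals and are served by any of three identical machines; a part
# waits if all machines are busy. Exponential interarrival and machining times
# (and their rates) are illustrative assumptions, since the excerpt does not
# specify the distributions.
import heapq
import random

random.seed(5)

N_MACHINES = 3
MEAN_INTERARRIVAL = 1.0    # assumed mean time between part arrivals
MEAN_MACHINING = 2.5       # assumed mean machining time per part
N_PARTS = 10_000

free_machines = N_MACHINES
queue = []                 # arrival times of parts waiting for a machine
events = [(random.expovariate(1 / MEAN_INTERARRIVAL), "arrival")]
arrivals = 0
total_wait = 0.0

while events:
    time, kind = heapq.heappop(events)
    if kind == "arrival":
        arrivals += 1
        if arrivals < N_PARTS:   # schedule the next arrival
            heapq.heappush(events, (time + random.expovariate(1 / MEAN_INTERARRIVAL), "arrival"))
        if free_machines > 0:    # a machine is available: machining begins immediately
            free_machines -= 1
            heapq.heappush(events, (time + random.expovariate(1 / MEAN_MACHINING), "departure"))
        else:                    # otherwise the part waits in the queue
            queue.append(time)
    else:                        # departure: a machine finishes a part
        if queue:                # start the longest-waiting part, if any
            arrived = queue.pop(0)
            total_wait += time - arrived
            heapq.heappush(events, (time + random.expovariate(1 / MEAN_MACHINING), "departure"))
        else:
            free_machines += 1

print(f"mean waiting time over {N_PARTS} parts: {total_wait / N_PARTS:.3f}")
```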
An Introduction to Statistical Computing
A Simulation-based Approach
- Jochen Voss (Author)
- 2013 (Publication Date)
- Wiley (Publisher)
3 Monte Carlo Methods
So far, in the first two chapters of this book, we have learned how to simulate statistical models on a computer. In this and the following two chapters we will discuss how such simulations can be used to study properties of the underlying statistical model. In this chapter we will concentrate on the approach to directly generate a large number of samples from the given model. The idea is then that the samples reflect the statistical behaviour of the model; questions about the model can then be answered by studying statistical properties of the samples. The resulting methods are called Monte Carlo Methods.
3.1 Studying models via simulation
When studying statistical models, analytical calculations often are only possible under assumptions such as independence of samples, normality of samples or large sample size. For this reason, many problems occurring in ‘real life’ situations are only approximately covered by the available analytical results. This chapter presents an alternative approach to such problems, based on estimates derived from computer simulations instead of analytical calculations.
The fundamental observation underlying the methods discussed in this and the following chapters is the following: if we can simulate a statistical model on a computer, then we can generate a large set of samples from the model and then we can learn about the behaviour of the model by studying the computer-generated set of samples instead of the model itself. We give three examples for this approach:
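The excerpt is cut off before its three examples, so the following is a generic illustration (an assumption, not one of the book's examples) of the simulation-based approach: generating many samples from a model and studying a statistic of interest through those samples.

```python
# Minimal sketch of the simulation-based approach described above: instead of an
# analytical calculation, generate many samples from the model and study their
# statistical properties. As an illustrative assumption (not one of the book's
# examples), we estimate the distribution of the sample median of 11 Exp(1)
# observations, for which exact formulas are awkward.
import numpy as np

rng = np.random.default_rng(seed=6)

n_repeats, n_obs = 100_000, 11
samples = rng.exponential(scale=1.0, size=(n_repeats, n_obs))
medians = np.median(samples, axis=1)      # one simulated median per repetition

print(f"estimated mean of the sample median: {medians.mean():.4f}")
print(f"estimated 95% range: [{np.quantile(medians, 0.025):.4f}, {np.quantile(medians, 0.975):.4f}]")
```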
Simulating Copulas: Stochastic Models, Sampling Algorithms, And Applications
Stochastic Models, Sampling Algorithms and Applications
- Matthias Scherer, Jan-frederik Mai (Authors)
- 2012 (Publication Date)
- ICP (Publisher)
Chapter 7 The Monte Carlo Method
This chapter was contributed by Elke Korn and Ralf Korn, for which we would like to thank them very much. Much more on the subject matter is to be found in their recent monograph Korn et al. (2010).
7.1 First Aspects of the Monte Carlo Method
The technical term Monte Carlo method is used for a great variety of subjects, methods, and applications in many areas. They range from high-dimensional numerical integration via the calculation of success probabilities in games of chance to the simulation of complicated phenomena in nature. However, they are all based on the approximation of an expectation of a random variable X by the arithmetic mean of i.i.d. realizations of X, i.e. the relation
E[X] ≈ (1/n) Σ_{i=1}^{n} X_i =: X̄_n. (7.1)
Here, X is a real-valued random variable with finite expectation, n a positive (sufficiently large) integer, and the X_i are independent realizations of random variables with the same distribution as X. We call this type of approximation the crude Monte Carlo method or simply the Monte Carlo method, and X̄_n the crude Monte Carlo estimator (for short: CMC). It has been widely agreed to name J. von Neumann and S. Ulam as the inventors of the Monte Carlo method. The method was secretly developed and used during World War II. The first publication presenting it to an (academic) audience was Metropolis and Ulam (1949). The name “Monte Carlo method” should indicate that one uses a sort of gambling to obtain an approximation procedure. Moreover, prior to the invention of pseudo-random numbers, one sometimes used reported tables of roulette outcomes as a source of i.i.d.
- Ian H. Hutchinson (Author)
- 2015 (Publication Date)
- Cambridge University Press (Publisher)
11 Monte Carlo techniques
So far we have been focussing on how particle codes work once the particles are launched. We've talked about how they are moved, and how self-consistent forces on them are calculated. What we have not addressed is how they are launched in an appropriate way in the first place, and how particles are reinjected into a simulation. We've also not explained how one decides statistically whether a collision has taken place to any particle and how one would then decide what scattering angle the collision corresponds to. All of this must be determined in computational physics and engineering by the use of random numbers and statistical distributions. Techniques based on random numbers are called by the name of the famous casino at Monte Carlo.
11.1 Probability and statistics
11.1.1 Probability and probability distribution
Probability, in the mathematically precise sense, is an idealization of the repetition of a measurement, or a sample, or some other test. The result in each individual case is supposed to be unpredictable to some extent, but the repeated tests show some average trends that it is the job of probability to represent. So, for example, the single toss of a coin gives an unpredictable result: heads or tails; but the repeated toss of a (fair) coin gives on average equal numbers of heads and tails. Probability theory describes that regularity by saying the probability of heads and tails is equal. Generally, the probability of a particular class of outcomes (e.g. heads) is defined as the fraction of the outcomes, in a very large number of tests, that are in the particular class.
[Footnote: S. Brandt (2014), Data Analysis: Statistical and Computational Methods for Scientists and Engineers, fourth edition, Springer, New York, gives a much more expansive introduction to statistics and Monte Carlo techniques.]
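A minimal sketch of the two random-number decisions mentioned above: whether a collision occurs during a time step and, if so, which scattering angle it corresponds to. The collision rate, time step, and isotropic-scattering model are illustrative assumptions, not from the excerpt.

```python
# Minimal sketch of the random-number decisions mentioned above: decide
# statistically whether a particle suffers a collision during a time step and,
# if it does, draw a scattering angle. The collision rate and the assumption of
# isotropic scattering are illustrative choices, not taken from the excerpt.
import math
import random

random.seed(7)

nu = 0.2    # assumed collision rate (collisions per unit time)
dt = 0.1    # assumed time step

def collides(rng=random):
    # Probability of at least one collision in a step of length dt
    p_collision = 1.0 - math.exp(-nu * dt)
    return rng.random() < p_collision

def scattering_angle(rng=random):
    # Isotropic scattering: cos(theta) uniform on [-1, 1]
    return math.acos(rng.uniform(-1.0, 1.0))

n_steps = 100_000
collisions = sum(collides() for _ in range(n_steps))
print(f"collision fraction per step: {collisions / n_steps:.4f} "
      f"(expected {1 - math.exp(-nu * dt):.4f})")
print(f"example scattering angle: {scattering_angle():.3f} rad")
```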