Chapter 1
Discrete Event Computer Simulation
The Monte Carlo Method describes a technique for solving stochastic problems through experimentation with random numbers. The method can be traced back to physical experiments the French naturalist G. L. L. Buffon used in 1773 to estimate π. However, the American statistician E. L. De Forest may have been the first to use the technique with random numbers, in 1876 (see Gentle (1985), p. 612). An early and well-known use of the Monte Carlo Method was by W. S. Gosset who, publishing under the pseudonym “Student,” used the method in 1908 to bolster his faith in the t-distribution; prior to this the t-distribution had been developed by “theory” that was at best not rigorous. Although the Monte Carlo Method may have originated in 1876, it was not until about 75 years later that S. Ulam and J. von Neumann gave it the name Monte Carlo Method (see Ulam (1976) for an account). The reason for the time lapse was the inapplicability of the method to many important problems until the advent of the digital computer, which was developed between 1946 and 1952 at such institutions as the University of Pennsylvania, Massachusetts Institute of Technology, National Bureau of Standards, and International Business Machines Corporation. The modern stored-program computer made feasible the voluminous calculations required by the Monte Carlo Method.
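Buffon's experiment, mentioned above, is easily reproduced on a computer and illustrates the Monte Carlo Method in miniature. The sketch below (in Python; the function name and parameters are illustrative, not from the text) simulates dropping a needle of length L on a floor ruled with parallel lines a distance t ≥ L apart; the probability of the needle crossing a line is 2L/(πt), so π can be estimated from the observed crossing frequency.

```python
import math
import random

def buffon_pi_estimate(n_drops, needle_len=1.0, line_gap=1.0, seed=42):
    """Estimate pi by simulating Buffon's needle experiment.

    A needle of length needle_len (<= line_gap) is dropped at random on a
    floor ruled with parallel lines line_gap apart.  The crossing
    probability is 2*needle_len / (pi*line_gap), so
    pi is estimated as 2*needle_len*n_drops / (line_gap*crossings).
    (For simplicity the angle is sampled using math.pi itself, which a
    physical experiment of course does not need.)
    """
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_drops):
        # distance from the needle's center to the nearest line
        d = rng.uniform(0.0, line_gap / 2)
        # acute angle between the needle and the lines
        theta = rng.uniform(0.0, math.pi / 2)
        # the needle crosses a line if its half-projection reaches the line
        if d <= (needle_len / 2) * math.sin(theta):
            crossings += 1
    return 2 * needle_len * n_drops / (line_gap * crossings)

print(buffon_pi_estimate(1_000_000))  # roughly 3.14 for large n_drops
```

As the chapter goes on to note, an experiment of this size is trivial on a modern computer but would have required voluminous hand calculation in Buffon's day; the slow convergence (the error shrinks only as the square root of the number of drops) is typical of Monte Carlo estimates.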
In comparison to today’s computers, early computers were slow and had limited memory. For example, the number of arithmetic operations per second (often called floating point operations per second, or flops) that could be performed was below 10,000 in the early 1960s, about 500,000 in the mid-1960s, 20,000,000 in the early 1970s, a billion in the early 1990s, and on the supercomputers of today it exceeds a trillion. Indeed, Moore’s Law, articulated by Gordon Moore of the Intel Corporation in 1965, asserts that computing power doubles every 1.5 years (see U.S. News & World Report (1997), pp. 64-65). While growth in certain areas may have been faster in the past and physical limits may slow growth in the future, Moore’s Law implies a ten-fold increase in computing power every five years, to perhaps 100 trillion flops by the year 2008. Some estimate an even faster growth: 10 trillion flops in 2000 and 100 trillion flops in 2004!
In addition, programming was done in machine or assembly language until about 1955 because higher-level languages such as FORTRAN were not yet available (the first integrated circuit was invented at Texas Instruments in 1958); special-purpose simulation languages did not become available until about a decade later. The early uses of the Monte Carlo Method, which is now known as simulation, concentrated on programming techniques, since debugging and running a program was the most arduous task in developing a simulation. The limitations of the early computers often forced oversimplifications of the problem; without such simplifications, programs would not run in feasible computer time or at feasible cost. Often, important issues, such as which program runs to make and how to analyze program output, were ignored.
As we enter the 21st century with a half-century of development of computer simulation behind us, it is now possible for a text to provide a theoretical basis for simulation methodology, details of an important simulation language, and the integration of these elements as they are brought to bear on a meaningful case study. In this book Chapters 1, 3, 4, an...