
- 315 pages
- English
- ePUB (mobile friendly)
- Available on iOS & Android
Statistics for Long-Memory Processes
About this book
Statistical Methods for Long Term Memory Processes covers the diverse statistical methods and applications for data with long-range dependence. Presenting material that previously appeared only in journals, the author provides a concise and effective overview of probabilistic foundations, statistical methods, and applications. The material emphasizes basic principles and practical applications and provides an integrated perspective of both theory and practice. This book explores data sets from a wide range of disciplines, such as hydrology, climatology, telecommunications engineering, and high-precision physical measurement. The data sets are conveniently compiled in the index, and this allows readers to view statistical approaches in a practical context.
Statistical Methods for Long Term Memory Processes also supplies S-PLUS programs for the major methods discussed. This feature allows the practitioner to apply long memory processes in daily data analysis. For newcomers to the area, the first three chapters provide the basic knowledge necessary for understanding the remainder of the material. To promote selective reading, the author presents the chapters independently. Combining essential methodologies with real-life applications, this outstanding volume is an indispensable reference for statisticians and scientists who analyze data with long-range dependence.
Statistics for Long-Memory Processes by Jan Beran is available in PDF and ePUB format, under Mathematics & Probability & Statistics.
CHAPTER 1
Introduction
1.1 An elementary result in statistics
One of the main results taught in an introductory course in statistics is: The variance of the sample mean is equal to the variance of one observation divided by the sample size. In other words, if X₁, ..., Xₙ are observations with common mean μ = E(Xᵢ) and variance σ² = var(Xᵢ) = E[(Xᵢ − μ)²], then the variance of the sample mean X̄ = (X₁ + ⋯ + Xₙ)/n is equal to

var(X̄) = σ²/n.    (1.1)
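As a quick sanity check, (1.1) can be verified by simulation for independent observations. The following minimal Python sketch (the book itself supplies S-PLUS programs; this is an illustration, not code from the book) compares the empirical variance of the sample mean over many replications with σ²/n:

```python
import random
import statistics

# Monte Carlo check of equation (1.1): for i.i.d. draws with variance
# sigma^2, the variance of the sample mean should be close to sigma^2 / n.
random.seed(0)
n, reps, sigma = 25, 20000, 2.0
means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
         for _ in range(reps)]
empirical = statistics.pvariance(means)   # observed variance of the sample mean
theoretical = sigma ** 2 / n              # sigma^2 / n = 0.16
print(empirical, theoretical)
```

With independent draws the two numbers agree closely; the point of this chapter is that the agreement can fail badly once the observations are correlated.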
A second elementary result one learns is: The population mean is estimated by the sample mean X̄, and for large enough samples the (1 − α)-confidence interval for μ is given by

X̄ ± z_{1−α/2} σ/√n    (1.2)

if σ² is known and

X̄ ± z_{1−α/2} s/√n    (1.3)

if σ² has to be estimated. Here s² = (n − 1)⁻¹ Σᵢ(Xᵢ − X̄)² is the sample variance and z_{1−α/2} is the upper (1 − α/2)-quantile of the standard normal distribution.
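The interval (1.3) is simple to compute in practice. The following Python sketch (the data values are invented for illustration) uses the sample standard deviation s in place of the unknown σ and the normal quantile z_{1−α/2}:

```python
from statistics import NormalDist, fmean, stdev

# Sketch of the (1 - alpha) confidence interval (1.3), with the sample
# standard deviation s replacing the unknown sigma.
def normal_ci(x, alpha=0.05):
    n = len(x)
    xbar = fmean(x)                            # sample mean
    s = stdev(x)                               # sample standard deviation
    z = NormalDist().inv_cdf(1 - alpha / 2)    # upper (1 - alpha/2) quantile
    half = z * s / n ** 0.5
    return xbar - half, xbar + half

# Illustrative measurements (not a data set from the book)
lo, hi = normal_ci([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3])
print(lo, hi)
```

Whether the resulting interval really covers μ with probability close to 1 − α is exactly the question raised below.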
Frequently, the assumptions that lead to (1.1), (1.2), and (1.3) are mentioned only briefly. The formulas are very simple and can even be calculated by hand. It is therefore tempting to use them in an automatic way, without checking the assumptions under which they were derived. How reliable are these formulas really in practical applications? In particular, is (1.1) always exact or at least a good approximation to the actual variance of X̄? Is the probability that (1.2) and (1.3) respectively contain the true value μ always equal to or at least approximately equal to 1 − α?
In order to answer these questions one needs to analyze some typical data sets carefully under this aspect. Before doing that (in Section 1.4), it is useful to think about the conditions that lead to (1.1), (1.2) and (1.3), and about why these rules might or might not be good approximations.
Suppose that X₁, X₂, ..., Xₙ are observations sampled randomly from the same population at time points i = 1, 2, ..., n. Thus, X₁, ..., Xₙ are random variables with the same (marginal) distribution F. The index i does not necessarily denote time. More generally, i can denote any other natural ordering, such as, for example, the position on a line in the plane.
Consider first equation (1.1). A simple set of conditions under which (1.1) is true can be given as follows:
- The population mean μ = E(Xᵢ) exists and is finite.
- The population variance σ² = var(Xᵢ) exists and is finite.
- X₁, ..., Xₙ are uncorrelated, i.e.,

ρ(i, j) = 0 for all i ≠ j,

where

ρ(i, j) = γ(i, j)/σ²

is the autocorrelation between Xᵢ and Xⱼ, and

γ(i, j) = cov(Xᵢ, Xⱼ) = E[(Xᵢ − μ)(Xⱼ − μ)]

is the autocovariance between Xᵢ and Xⱼ.
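In practice, assumption 3 can be examined by estimating the autocorrelations from the data. Under stationarity the autocovariance between Xᵢ and Xⱼ depends only on the lag k = |i − j|. A minimal Python sketch of the standard estimators (an illustration, not code from the book):

```python
from statistics import fmean

# Sample autocovariance and autocorrelation at lag k, assuming
# stationarity so that the autocovariance depends only on k = |i - j|.
def sample_autocovariance(x, k):
    n = len(x)
    xbar = fmean(x)
    return sum((x[i] - xbar) * (x[i + k] - xbar) for i in range(n - k)) / n

def sample_autocorrelation(x, k):
    return sample_autocovariance(x, k) / sample_autocovariance(x, 0)

# A strictly alternating series has strong negative lag-1 correlation
x = [1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0]
print(sample_autocorrelation(x, 1))   # → -0.875
```

Lag-k correlations that stay clearly away from zero for many lags are the warning sign that (1.1)–(1.3) may be misleading.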
The questions one needs to answer are:
- How realistic are these assumptions?
- If one or more of these assumptions does not hold, to what extent are (1.1), (1.2), and (1.3) wrong, and how can they be corrected?
The first two assumptions depend on the marginal population distribution F only. Here, our main concern is assumption 3. Unless specified otherwise, we therefore assume throughout the book that the first two assumptions hold. The situation involving infinite variance and/or mean is discussed briefly in Chapter 11.
Let us now consider assumption 3. In some cases this assumption is believed to be plausible a priori. In other cases, one tends to believe that the dependence between the observations is so weak that it is negligible for all practical purposes. In particular, in experimental situations one often hopes to force observations to be at least approximately independent, by planning the experiment very carefully. Unfortunately, there is ample practical evidence that this wish does not always become a reality (see, e.g., Sections 1.4 and 1.5). A typical example is the series of standard weight measurements by the US National Bureau of Standards, which is discussed in Sections 1.4 and 7.3. This example illustrates that non-negligible persisting correlations may occur, in spite of all precautions. The reasons for such correlations are not always obvious. Some possible “physical” explanations are discussed in Section 1.3 (see also Secti...
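The practical consequence of ignored positive correlation can be illustrated by simulation. The following Python sketch uses an assumed AR(1) model with lag-one correlation φ = 0.7 (a hypothetical example, not one of the book's data sets) and estimates how often the nominal 95% interval (1.3) actually covers the true mean μ = 0:

```python
import random
from statistics import NormalDist, fmean, stdev

# Coverage of the nominal 95% interval (1.3) under positive correlation.
# Observations follow an assumed AR(1) model X_t = phi * X_{t-1} + e_t.
random.seed(1)
phi, n, reps = 0.7, 50, 4000
z = NormalDist().inv_cdf(0.975)
covered = 0
for _ in range(reps):
    x, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + random.gauss(0.0, 1.0)   # AR(1) recursion
        x.append(prev)
    half = z * stdev(x) / n ** 0.5                   # interval (1.3)
    covered += (-half <= fmean(x) <= half)           # does it contain mu = 0?
print(covered / reps)
```

With φ = 0.7 the observed coverage falls far below the nominal 95%, because σ²/n badly underestimates the true variance of X̄; long-memory processes, the subject of this book, make the discrepancy even more severe.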
Table of contents
- Cover
- Title Page
- Copyright Page
- Table of Contents
- Preface
- 1 Introduction
- 2 Stationary processes with long memory
- 3 Limit theorems
- 4 Estimation of long memory: heuristic approaches
- 5 Estimation of long memory: time domain MLE
- 6 Estimation of long memory: frequency domain MLE
- 7 Robust estimation of long memory
- 8 Estimation of location and scale, forecasting
- 9 Regression
- 10 Goodness of fit tests and related topics
- 11 Miscellaneous topics
- 12 Programs and data sets
- Bibliography
- Author index
- Subject index