
Statistical Models

Statistical models are mathematical representations of relationships between variables. They are used to analyze and make predictions based on data. In business, statistical models can help in forecasting sales, identifying trends, and making data-driven decisions. These models can range from simple linear regression to more complex machine learning algorithms.

Written by Perlego with AI-assistance

9 Key excerpts on "Statistical Models"

  • Practical Management Science
    This has several advantages. First, it enables managers to understand the problem better. In particular, the model helps to define the scope of the problem, the possible solutions, and the data requirements. Second, it allows analysts to use a variety of the mathematical solution procedures that have been developed over the past half century. These solution procedures are often computer-intensive, but with today’s cheap and abundant computing power, they are usually feasible. Finally, the modeling process itself, if done correctly, often helps to “sell” the solution to the people who must work with the system that is eventually implemented. In this introductory chapter, we begin by discussing a relatively simple example of a mathematical model. Then we discuss the distinction between modeling and a collection of models. Next, we discuss a seven-step modeling process that can be used, in essence if not in strict conformance, in most successful management science applications. Finally, we discuss why the study of management science is valuable, not only to large corporations, but also to students like you who are about to enter the business world. 1.2 A CAPITAL BUDGETING EXAMPLE. As indicated earlier, a mathematical model is a set of mathematical relationships that represent, or approximate, a real situation. Models that simply describe a situation are called descriptive models. Other models that suggest a desirable course of action are called optimization models. To get started, consider the following simple example of a mathematical model. It begins as a descriptive model, but it then becomes an optimization model. A Descriptive Model. A company faces capital budgeting decisions. (This type of model is discussed in detail in Chapter 6.) There are seven potential investments. Each has an investment cost and a …
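    The capital budgeting example reads naturally as a 0/1 selection problem. Below is a minimal sketch of that formulation in Python; since the excerpt is cut off before the book's data, the costs, net present values, and budget are invented for illustration.

```python
# A minimal sketch of the capital budgeting model described above.
# Costs, NPVs, and the budget are illustrative assumptions, not the
# book's data. With 7 investments there are only 2**7 = 128 subsets,
# so brute-force enumeration is sufficient.
from itertools import product

costs = [5, 3, 7, 6, 4, 8, 2]    # investment cost of each option (assumed)
npvs  = [8, 4, 11, 9, 6, 12, 3]  # net present value of each option (assumed)
budget = 15

best_value, best_choice = 0.0, None
for choice in product([0, 1], repeat=len(costs)):       # 0/1 decision per investment
    cost = sum(c * x for c, x in zip(costs, choice))
    if cost <= budget:                                   # descriptive model: feasibility
        value = sum(v * x for v, x in zip(npvs, choice))
        if value > best_value:                           # optimization model: best NPV
            best_value, best_choice = value, choice

print("select:", [i for i, x in enumerate(best_choice) if x], "NPV:", best_value)
```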
  • Introduction to Environmental Modeling
    In a wide spectrum of situations, government agencies legislate that certain models must be used in analyzing a particular problem type [e.g., see 206]. A prime reason for this situation is the threat of legal action positing that some physical problem that develops in the future was unforeseen because a non-standard model was used. Whether or not such a proposition is true, the overhanging threat of legal challenges stifles the growth of new management tools, modeling and otherwise, and of the development of better ways to interact optimally with the environment. Unscrupulous touting of the merits of what is actually a scientifically deficient model only exacerbates this problem. Question 3.5: How might a conceptual model be useful in the development of a mathematical model? We now turn our attention to the two major classifications of mathematical models, probabilistic and deterministic. It is fair to say that at this time, those who develop and apply deterministic models form a separate camp from those who work with probabilistic methods. By combining the elements of both approaches, opportunities for significant advances may materialize. Probabilistic models. The objective of probabilistic modeling is to incorporate uncertainty and probability into mathematical descriptions. Two major subclasses of probabilistic models are referred to as statistical models and stochastic models. The distinction between these two designations, in brief, is that statistical models involve random elements that are unpredictable, while stochastic models make use of probability to describe the random components. Statistical models, in essence, are concerned with errors: errors in measurements, in posing incomplete equations, or in simply reproducing data for a particular system. Theories exist that guide the development of experiments and data.
  • Manufacturing and Enterprise: An Integrated Systems Approach

    • Adedeji B. Badiru, Oye Ibidapo-Obe, Babatunde J. Ayeni (Authors)
    • 2018(Publication Date)
    • CRC Press
      (Publisher)
    A time series is a sequence of numerical data points in successive order. Generally, a time series is a sequence taken at successive equally spaced points in time. It is a sequence of discrete-time data. For example, in investment analysis, a time series tracks the movement of the chosen data points, such as a security’s price over a specified period of time with data points recorded at regular intervals. One of the main goals of time-series analysis is to forecast future values of the series. A trend is a regular, slowly evolving change in the series level. Changes can be modeled by low-order polynomials. There are three general classes of models that can be constructed for purposes of forecasting or policy analysis. Each involves a different degree of model complexity and presumes a different level of comprehension about the processes one is trying to model. In making a forecast, it is also important to provide a measure of how accurate one can expect the forecast to be. The use of intuitive methods usually precludes any quantitative measure of confidence in the resulting forecast. The statistical analysis of the individual relationships that make up a model, and of the model as a whole, makes it possible to attach a measure of confidence to the model’s forecasts.
    In time-series models, we presume to know nothing about the causality that affects the variable we are trying to forecast. Instead, we examine the past behavior of a time series in order to infer something about its future behavior. The method used to produce a forecast may involve the use of a simple deterministic model such as a linear extrapolation or the use of a complex stochastic model for adaptive forecasting. One example of the use of time-series analysis would be the simple extrapolation of a past trend in predicting population growth. Another example would be the development of a complex linear stochastic model for passenger loads on an airline. Time-series models have been used to forecast the demand for airline capacity, seasonal telephone demand, the movement of short-term interest rates, and other economic variables. Time-series models are particularly useful when little is known about the underlying process one is trying to forecast. The limited structure in time-series models makes them reliable only in the short run, but they are nonetheless rather useful.
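    As a concrete illustration of the "simple deterministic model such as a linear extrapolation" mentioned above, the sketch below fits a straight-line trend to a short series and extrapolates it three periods ahead. The demand figures are invented for illustration.

```python
# A minimal sketch of linear trend extrapolation for a time series.
# The monthly demand figures are made up for illustration.
import numpy as np

demand = np.array([112, 118, 121, 127, 131, 138, 140, 147])  # past observations
t = np.arange(len(demand))                                    # time index 0..7

slope, intercept = np.polyfit(t, demand, deg=1)  # fit the low-order (linear) trend
future_t = np.arange(len(demand), len(demand) + 3)
forecast = intercept + slope * future_t          # extrapolate three periods ahead

print(f"trend: {intercept:.1f} + {slope:.2f}*t, forecast: {forecast.round(1)}")
```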
    In regression models, the variable under study is explained by a single function (linear or non-linear) of a number of explanatory variables. The equation will often be time-dependent (i.e., the time index will appear explicitly in the model), so that one can predict the response over time of the variable under study. The main purpose of constructing regression models is forecasting. A forecast is a quantitative estimate (or set of estimates) about the likelihood of future events which is developed on the basis of past and current information. This information is embodied in the form of a model. This model can be in the form of a single-equation structural model, a multi-equation model or a time-series model. By extrapolating our models beyond the period over which they were estimated, we can make forecasts about near future events.
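    A single-equation regression model of the kind this paragraph describes might look like the following sketch, where the variable under study is explained by one explanatory variable plus an explicit time index, and the fitted equation is then extrapolated one period ahead. All data and variable names are assumptions.

```python
# A minimal sketch of a single-equation regression model with an
# explanatory variable and an explicit time index. Data are made up.
import numpy as np

t   = np.arange(8)                                          # time index
ads = np.array([2.0, 2.5, 2.2, 3.0, 3.1, 3.5, 3.4, 4.0])    # explanatory variable
sales = np.array([20, 23, 22, 27, 28, 31, 30, 35])          # variable under study

X = np.column_stack([np.ones_like(t), t, ads])   # intercept, trend, ad spend
beta, *_ = np.linalg.lstsq(X, sales, rcond=None) # least-squares coefficients

# Forecast the next period by plugging in assumed future values.
x_next = np.array([1.0, 8, 4.2])                 # t = 8, planned ad spend 4.2
print("coefficients:", beta.round(2), "forecast:", (x_next @ beta).round(1))
```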
  • Applying social science: The role of social research in politics, policy and practice

    • David Byrne (Author)
    • 2011(Publication Date)
    • Policy Press
      (Publisher)
    Analysis ... assumes inter alia that the available data forms only a subset of all the data that might have been collected, and then attempts to use the information in the available data to make more general statements about either the larger set or about the mechanism that is producing the data. ... In order to make such statements, we need first to abstract the essence of the data-producing mechanism into a form that is amenable to mathematical and statistical treatment. Such a formulation will typically involve mathematical equations that express relationships between measured ‘variables’ and assumptions about the random processes that govern the outcome of individual measurements. That is the statistical model of the system. Fitting the model to a given set of data will then provide a framework for extrapolating the results to a wider context or for predicting future outcomes, and can often also lead to an explanation of the system. (1998, p ix)
    Statements relating to ‘the larger set that is producing the data’ are in essence statistical inference. That is to say, they are about what we can say about a population on the basis of a sample from it. Note that while conventional statistical approaches almost always deal with samples, with parts that stand for the whole, in the social sciences we often at least have data about all the cases in a population even if we do not have all possible measurements of those cases. Statements about the mechanism that is producing the data which can be used for generalisation and/or prediction are causal models. In order to predict outcomes we must have some means of relating measurements of elements, usually considered to be variables, which are causal to the outcome, and a description of how they are causal. That description is the model. We should note that despite the usual function of statistical models being to predict, some authorities balk at claiming predictive power. For example Everitt and Dunn were far more cautious, and logically correct, when they stated that: ‘In this text ... a model is considered to be a way of simplifying and portraying the structure in a set of data, and does not necessarily imply any causal mechanisms’ (1983, p 5).
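    To make the idea of inference from a sample to the "larger set" concrete, here is a minimal sketch: a 95% confidence interval for a population mean computed from a small sample. The sample values are invented.

```python
# A minimal sketch of statistical inference: using a sample to make a
# statement about the population mean. The sample values are invented.
import numpy as np
from scipy import stats

sample = np.array([4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.8, 4.4])
n = len(sample)
mean, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)

t_crit = stats.t.ppf(0.975, df=n - 1)        # two-sided 95% critical value
low, high = mean - t_crit * se, mean + t_crit * se
print(f"95% CI for the population mean: ({low:.2f}, {high:.2f})")
```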
  • Statistical Thinking: Improving Business Performance

    • Roger W. Hoerl, Ronald D. Snee (Authors)
    • 2020(Publication Date)
    • Wiley
      (Publisher)
    Part Three: Formal Statistical Methods. Chapter 7: Building and Using Models. “All models are wrong, but some are useful.” —George E. P. Box. Model building is one of the methods used in the improvement frameworks discussed in Chapter 5. Model building is a broad topic, and in this chapter we will develop a basic understanding of the types and uses of models and develop functional capability to apply basic regression analysis to real problems. More extensive study and applications experience will be required to develop mastery of the entire field of regression analysis, or of other, more complex model building methods, such as machine learning. The power of statistical thinking is in developing process knowledge that can be used to manage and improve processes. The most effective way to create process knowledge is to develop a model that describes the behavior of the process. Webster’s New World College Dictionary defines model as “a generalized, hypothetical description, often based on an analogy, used in analyzing and explaining something.” In this chapter we learn how to integrate our frameworks and tools (see Chapters 5 and 6) and enhance them by building models. Model development is an iterative process in which we move back and forth between hypotheses about what process variables are important and data that confirm or disprove these hypotheses. Our overall strategy is as follows: we build a model that relates the process outputs (y’s) to input and process variables (x’s) that cause systematic behavior in y. That is, at a high level we consider that y = f(x), where “f” refers to some function of the input and process variables, which we collectively designate as “x.” This function quantifies the systematic variation in y.
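    The y = f(x) strategy can be made concrete with a short sketch: two process variables, an output y, a linear f fitted by least squares, and R² as a summary of how much of the variation in y is systematic. The data are synthetic.

```python
# A minimal sketch of y = f(x): fit a linear function of two process
# variables and quantify the systematic variation with R-squared.
# Data are synthetic, generated for illustration.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, 50)                              # process variable 1
x2 = rng.uniform(0, 5, 50)                               # process variable 2
y = 3.0 + 1.5 * x1 - 2.0 * x2 + rng.normal(0, 1.0, 50)   # output with noise

X = np.column_stack([np.ones_like(x1), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # estimate f's coefficients

residuals = y - X @ beta
r2 = 1 - residuals.var() / y.var()             # fraction of variation explained
print("f(x) coefficients:", beta.round(2), "R^2:", round(r2, 3))
```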
  • Simulating Business Processes for Descriptive, Predictive, and Prescriptive Analytics
    • Andrew Greasley (Author)
    • 2019(Publication Date)
    • De Gruyter
      (Publisher)
    These models are called explanatory models as they represent the real system and attempt to explain the behaviour that occurs. This means that the effect of a change in the design of the process can be assessed by changing the structure of the model. These models generally have far smaller data needs than data-driven models because of the key role of the representation of structure. For example, we can represent a supermarket by the customers that flow through the supermarket and the processes they undertake — collecting groceries and paying at the till. A model would then not only enable us to show current customer waiting time at the supermarket tills (descriptive analytics) but also allow us to change the design of the system, such as changing the number of tills, and predict the effect on customer waiting time (predictive analytics). We can also specify a target customer waiting time and determine the number of tills required (prescriptive analytics). However, most real systems are very complex — a supermarket has many different staff undertaking many processes using different resources — for example, the collection and unpacking of goods, keeping shelves stocked, heating and ventilation systems, etc. It is usually not feasible to include all the elements of the real system, so a key part of modeling is making choices about which parts of the system should be included in the model in order to obtain useful results. This simplification process may use statistics in the form of mathematical equations to represent real-life processes (such as the customer arrival rate) and a computer program (algorithm) in the form of process logic to represent the sequence of activities that occur within a process.

    Simulation for Descriptive, Predictive, and Prescriptive Analytics

    Simulation is not simply a predictive or even a prescriptive tool but can also be used in a descriptive mode to develop understanding.
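    The supermarket-till example lends itself to a short simulation sketch. The one below uses the SimPy discrete-event simulation library; the arrival rate, service rate, and till counts are illustrative assumptions, not figures from the text. Rerunning with a different capacity is exactly the design change the passage describes: predicting the effect of the number of tills on waiting time.

```python
# A minimal sketch of the supermarket-till example using SimPy
# (a discrete-event simulation library). Arrival and service rates
# are illustrative assumptions, not figures from the text.
import random
import simpy

ARRIVAL_RATE = 1.0   # customers per minute (assumed)
SERVICE_RATE = 0.4   # customers per minute per till (assumed)

def customer(env, tills, waits):
    arrived = env.now
    with tills.request() as req:         # join the queue for a till
        yield req                        # wait until a till is free
        waits.append(env.now - arrived)  # record time spent queuing
        yield env.timeout(random.expovariate(SERVICE_RATE))  # checkout

def arrivals(env, tills, waits):
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        env.process(customer(env, tills, waits))

def mean_wait(n_tills, sim_minutes=8 * 60, seed=42):
    random.seed(seed)
    env = simpy.Environment()
    tills = simpy.Resource(env, capacity=n_tills)
    waits = []
    env.process(arrivals(env, tills, waits))
    env.run(until=sim_minutes)
    return sum(waits) / len(waits)

# Predictive use: compare waiting time under alternative designs.
for n in (3, 4, 5):
    print(f"{n} tills -> mean wait {mean_wait(n):.1f} min")
```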
  • Statistical Thinking: Analyzing Data in an Uncertain World

    Chapter 17: Practical Statistical Modeling. In this chapter, you will see several examples of statistical analyses applied to real data. While the foregoing chapters have provided you with the basis for understanding how to analyze data, things often become messy when working with real data. In this chapter, we lay out an overall strategy for performing a statistical analysis and provide several examples. The examples come from a broad range of domains, to help demonstrate the many kinds of questions that can be addressed using statistical modeling. We also introduce some more advanced analysis methods and concepts, which are necessary to properly analyze some of the datasets, including logistic regression and mixed-effects models, and we delve into greater detail about how to perform diagnostics on the results from statistical analyses, along with ways to address commonly encountered issues.

    Learning Objectives. Having read this chapter, you should be able to:

    • Determine the appropriate statistical model to test a particular scientific hypothesis.
    • Identify outliers and describe why they could be problematic for an analysis.
    • Determine whether a mixed-effects model is necessary in a particular situation.
    • Check the assumptions of the statistical model to ensure that they are not violated.

    The Process of Statistical Modeling. There is a set of steps that we generally go through when we want to use our statistical model to test a scientific hypothesis:

    1. Specify the question of interest.
    2. Identify or collect the appropriate data.
    3. Prepare and visualize the data.
    4. Determine the appropriate model.
    5. Fit the model to the data.
    6. Perform diagnostics on the model to check assumptions.
    7. Test hypotheses and quantify effect size.

    In each of the following examples, we will follow these steps to perform the analysis.
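    As a compressed illustration of steps 4 through 7 (choose a model, fit it, run diagnostics, test hypotheses), the sketch below fits the logistic regression this chapter mentions to synthetic data using the statsmodels library. The scenario and variable names are invented.

```python
# A compressed sketch of steps 4-7 of the modeling process: choose a
# model (logistic regression), fit it, run a basic diagnostic, and test
# a hypothesis. Data are synthetic; the scenario is invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 200)                # predictor
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))   # true success probability
y = rng.binomial(1, p)                   # binary outcome

X = sm.add_constant(x)                   # steps 4-5: specify and fit the model
result = sm.Logit(y, X).fit(disp=0)

# Step 6: a simple diagnostic -- check for separation / runaway coefficients.
assert np.all(np.abs(result.params) < 10), "suspicious coefficients"

# Step 7: test the hypothesis that x has no effect (slope = 0).
print("slope:", result.params[1].round(2), "p-value:", result.pvalues[1].round(4))
```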
  • Supply Chain Analytics and Modelling: Quantitative Tools and Applications

    • Nicoleta Tipi (Author)
    • 2021(Publication Date)
    • Kogan Page
      (Publisher)
    As indicated in the previous chapters of this book, a number of commercial software packages are available and can be used to support the application of many of these models. Some of these software packages may have the capability to solve only one aspect of the problem, whereas others may allow for a more integrated approach. Some of these software packages may require advanced training for the analysts and users to create and operate the models, which incurs additional costs.
    In many situations, the models discussed in this section can be implemented using spreadsheet software. Some of the examples developed in this chapter use Microsoft Excel 2016 for Windows and Minitab software. It is assumed that the reader is familiar with some of the basic functions in Excel; however, the functions used and the way these have been considered are detailed in each case.

    Descriptive models

    As discussed in earlier chapters, descriptive models are those looking to answer questions on what happened, why it happened and what is happening now. Data visualization models are critical in supporting the decision-making process and interpreting results. Visualizing data, and using descriptive analytics to understand the relationship between different variables in the data, will help an analyst, for example, to select the most appropriate forecasting model for prediction. Frequency distributions and histograms form part of the descriptive statistical models; however, as these are visual tools, some descriptive models have been presented in the previous chapter (see Chapter 5).
    Descriptive models are characterized by the use of statistical analysis. A number of statistical models can be used in the field of business and management, with particular application to the supply chain. Some of these are presented in this section, and more examples can be found in Curwin and Slater (2008, 2004), Curwin et al (2013), Evans (2016), Kubiak and Benbow (2016) and Field (2018), to name just a few books in this field.
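    As a small illustration of the descriptive models discussed here, the sketch below computes summary measures and a frequency distribution (the numeric counterpart of a histogram) for a set of supply chain lead times. It uses Python rather than the chapter's Excel and Minitab, and the data are invented.

```python
# A minimal descriptive-statistics sketch: summary measures and a
# frequency distribution for supply chain lead times (invented data),
# in Python rather than the chapter's Excel/Minitab.
import numpy as np

lead_times = np.array([4, 6, 5, 7, 9, 5, 6, 8, 4, 5, 7, 6, 10, 5, 6])  # days

print("mean:", lead_times.mean().round(2),
      "median:", np.median(lead_times),
      "std dev:", lead_times.std(ddof=1).round(2))

counts, edges = np.histogram(lead_times, bins=range(4, 12))  # frequency distribution
for lo, c in zip(edges[:-1].astype(int), counts):
    print(f"{lo}-{lo + 1} days: {'#' * c} ({c})")
```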
  • Economic-Mathematical Methods and Models under Uncertainty
    These models reflect the essential properties of a real object (process), though in fact reality is more significant and richer. Today, therefore, this branch of economic science develops in three main directions of economic-process modeling: the creation of analytic models, the creation of statistical models, and the development of a theory of uncertainty in economic processes. Each of these directions has its advantages and shortcomings.

    Analytic and Statistical Models

    Analytic models in economics are rougher: they take into account only a small number of factors and always require some assumptions and simplifications. On the other hand, the results of calculations carried out with them are easily surveyed; they distinctly reflect the principal regularities inherent in the phenomenon. Above all, analytic models are better adapted to the search for optimal decisions: they can give a qualitative and quantitative representation of the economic event under investigation, are capable of predicting the event, and can also suggest appropriate methods for controlling similar economic processes. Statistical models, compared with analytic models, are more precise and detailed; they do not require such rough assumptions and allow a greater number (in theory, an infinitely greater number) of factors to be taken into account. But statistical models also have shortcomings, including unwieldiness, poor visibility of the economic event, heavy demands on machine time and, above all, the extreme difficulty of searching for optimal decisions, which have to be found “to the touch”, by guess and trial [1–11]. Thus, it is considerably more difficult to comprehend the results of statistical modeling than calculations on analytic models, and correspondingly more difficult to optimize the solution (it must be found “by feeling”, blindly). A skilful combination of analytic and statistical methods in the investigation of operations is a matter of art, feeling and the experience of the researcher.
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.