Technology & Engineering
Multiple Regression
Multiple regression is a statistical technique used to analyze the relationship between a dependent variable and two or more independent variables. It extends simple linear regression by considering multiple predictors simultaneously. This method is commonly employed in technology and engineering to model complex relationships and make predictions based on multiple input factors.
Written by Perlego with AI assistance
7 Key excerpts on "Multiple Regression"
- Robert Ho (Author)
- 2013 (Publication Date)
- Chapman and Hall/CRC (Publisher)
14 Multiple Regression

14.1 Aim

Multiple Regression is a statistical technique through which one can analyze the relationship between a dependent or criterion variable and a set of independent or predictor variables. As a statistical tool, Multiple Regression is often used to accomplish three objectives:

1. To find the best prediction equation for a set of variables, that is, given X and Y (the predictors), what is Z (the criterion variable)?
2. To control for confounding factors in order to assess the contribution of a specific variable or set of variables, that is, identifying independent relationships.
3. To find structural relationships and provide explanations for seemingly complex multivariate relationships, such as is done in path analysis.

14.2 Multiple Regression Techniques

There are three major Multiple Regression techniques: standard Multiple Regression, hierarchical regression, and statistical (stepwise) regression. They differ in terms of how the overlapping variability owing to correlated independent variables is handled, and who determines the order of entry of independent variables into the equation (Tabachnick and Fidell, 2001).

14.2.1 Standard Multiple Regression

For this regression model, all the study's independent variables are entered into the regression equation at one time. Each independent variable is then assessed in terms of the unique amount of variance it accounts for. The disadvantage of the standard regression model is that it is possible for an independent variable to be strongly related to the dependent variable, and still be considered an unimportant predictor, if its unique contribution in explaining the dependent variable is small.

14.2.2 Hierarchical Multiple Regression

This regression model is more flexible as it allows the researcher to specify the order of entry of the independent variables in the regression equation.
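To make the hierarchical-entry idea concrete, here is a minimal NumPy sketch: predictor blocks enter in a researcher-specified order, and the change in R² at each step shows what the newly entered block adds beyond what is already in the equation. The variables, simulated data, and entry order are illustrative assumptions, not taken from the excerpt.

```python
import numpy as np

def r_squared(X, y):
    """Fit ordinary least squares (with an intercept) and return R^2."""
    A = np.column_stack([np.ones(len(y)), X])     # design matrix with intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares coefficients
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)             # block 1: e.g., a control variable
x2 = 0.5 * x1 + rng.normal(size=n)  # block 2: correlated predictor of interest
y = 1.0 + 0.8 * x1 + 0.6 * x2 + rng.normal(size=n)

r2_step1 = r_squared(x1.reshape(-1, 1), y)          # block 1 enters first
r2_step2 = r_squared(np.column_stack([x1, x2]), y)  # block 2 added second
print(f"R^2 after block 1: {r2_step1:.3f}")
print(f"R^2 change when block 2 enters: {r2_step2 - r2_step1:.3f}")
```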
- Prem S. Mann (Author)
- 2020 (Publication Date)
- Wiley (Publisher)
CHAPTER 14 Multiple Regression

14.1 Multiple Regression Analysis
14.2 Assumptions of the Multiple Regression Model
14.3 Standard Deviation of Errors
14.4 Coefficient of Multiple Determination
14.5 Computer Solution of Multiple Regression

14.1 Multiple Regression Analysis

LEARNING OBJECTIVES: After completing this section, you should be able to:

- Explain the difference between a simple linear regression model and a multiple linear regression model.
- Explain a first-order Multiple Regression model.
- Explain the meaning of the partial regression coefficients in a multiple linear regression model.
- Determine the degrees of freedom used to make inferences about a single parameter in a multiple linear regression model.

The simple linear regression model discussed in Chapter 13 was written as

y = A + Bx + ϵ

This model includes one independent variable, which is denoted by x, and one dependent variable, which is denoted by y. As we know from Chapter 13, the term represented by ϵ in the above model is called the random error. In Chapter 13, we discussed simple linear regression and linear correlation. A simple regression model includes one independent and one dependent variable, and it presents a very simplified scenario of real-world situations. In the real world, a dependent variable is usually influenced by a number of independent variables. For example, the sales of a company's product may be determined by the price of that product, the quality of the product, and advertising expenditure incurred by the company to promote that product. Therefore, it makes more sense to use a regression model that includes more than one independent variable. Such a model is called a Multiple Regression model. In this chapter we will discuss Multiple Regression models. Usually a dependent variable is affected by more than one independent variable.
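The sales scenario above lends itself to a short worked example. The sketch below simulates data in the spirit of that example and fits the first-order model y = b0 + b1x1 + b2x2 + b3x3 by least squares; all numbers and variable names are invented for illustration.

```python
import numpy as np

# Illustrative data for the sales example: sales as a function of price,
# quality rating, and advertising spend (all values are made up).
rng = np.random.default_rng(1)
n = 100
price = rng.uniform(5, 15, n)
quality = rng.uniform(1, 10, n)
advertising = rng.uniform(0, 50, n)
error = rng.normal(0, 5, n)  # the random error term ϵ
sales = 40 - 2.0 * price + 3.0 * quality + 0.8 * advertising + error

# Estimate y = b0 + b1*x1 + b2*x2 + b3*x3 by least squares.
X = np.column_stack([np.ones(n), price, quality, advertising])
b, *_ = np.linalg.lstsq(X, sales, rcond=None)
print("b0..b3 =", np.round(b, 2))
# Each slope is a partial regression coefficient: the expected change in
# sales for a one-unit change in that predictor, holding the others fixed.
```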
Biostatistics
Basic Concepts and Methodology for the Health Sciences, 10th Edition International Student Version
- Wayne W. Daniel, Chad L. Cross (Authors)
- 2014 (Publication Date)
- Wiley (Publisher)
It is not unusual to find researchers investigating the relationships among a dozen or more variables. For those who have access to a computer, the decision as to how many variables to include in an analysis is based not on the complexity and length of the computations but on such considerations as their meaningfulness, the cost of their inclusion, and the importance of their contribution.

In this chapter we follow closely the sequence of the previous chapter. The regression model is considered first, followed by a discussion of the correlation model. In considering the regression model, the following points are covered: a description of the model, methods for obtaining the regression equation, evaluation of the equation, and the uses that may be made of the equation. In both models the possible inferential procedures and their underlying assumptions are discussed.

10.2 THE MULTIPLE LINEAR REGRESSION MODEL

In the Multiple Regression model we assume that a linear relationship exists between some variable Y, which we call the dependent variable, and k independent variables, X1, X2, ..., Xk. The independent variables are sometimes referred to as explanatory variables, because of their use in explaining the variation in Y. They are also called predictor variables, because of their use in predicting Y.

Assumptions: The assumptions underlying Multiple Regression analysis are as follows.

1. The Xi are nonrandom (fixed) variables. This assumption distinguishes the Multiple Regression model from the multiple correlation model, which will be presented in Section 10.6. This condition indicates that any inferences that are drawn from sample data apply only to the set of X values observed and not to some larger collection of X's. Under the regression model, correlation analysis is not meaningful.
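As a minimal sketch of fitting such a model and inspecting its residuals, the following uses simulated data; the fixed-X assumption from the excerpt is noted in a comment.

```python
import numpy as np

# Fit Y on k explanatory variables and check the residuals.
# (Under the fixed-X assumption, inferences apply only to the
# observed X values, not to a larger collection of X's.)
rng = np.random.default_rng(2)
n, k = 120, 3
X = rng.normal(size=(n, k))  # k explanatory variables
y = 2.0 + X @ np.array([1.5, -0.7, 0.3]) + rng.normal(0, 1, n)

A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ beta
residuals = y - fitted

print("mean residual (should be ~0):", residuals.mean().round(4))
print("residual SD (n - k - 1 df):", residuals.std(ddof=k + 1).round(3))
```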
Applying Statistics in the Courtroom
A New Approach for Attorneys and Expert Witnesses
- Philip Good (Author)
- 2001 (Publication Date)
- Chapman and Hall/CRC (Publisher)
Chapter 12 Multiple Regression

Statistical analysis is perhaps the prime example of those areas of technical wilderness into which judicial expeditions are best limited to ascertaining the lay of the land.¹ There is some argumentation in the briefs about the relative merits of multiple linear regression and logistic fitting analysis. Neither side, however, either in the briefs or at the oral argument, bothered to explain, in intelligible terms or otherwise, what these terms mean.²

In Chapter 10, we saw that the courts distrust arguments predicated on the presence or absence of a single factor. In the last chapter, we noted the improvements that could be obtained by taking advantage of the detailed relationships among variables via regression analysis. In this chapter, we consider regression methods for assessing the relative contributions made by multiple factors in cases involving earnings, housing, and employment. We also consider actual and potential counterarguments. Concepts introduced include collinear and partially dependent variables; goodness-of-fit; linear, nonlinear, and logistic regression; and cohort analysis.

¹ Appalachian Power Co. v. EPA, 135 F.3d 791 (D.C. Cir. 1998).
² Craig v. Minnesota State University Board, 731 F.2d 465, 476, fn. 14 (8th Cir. 1984).

12.1 Lost Earnings

Very few cases involve only one explanatory variable. Suppose, in the example considered in Section 11.3.1, the supplier's attorney had responded that while the wholesaler and similarly situated companies may have done well in the pre-conspiracy period, the entire industry subsequently suffered reverses. Consequently, the attorney proposes the following formula for earnings determination in which industry sales are incorporated:

Earnings = $27,520 + $11.7 × Year + 0.0014 × $Industry_Sales

As before, earnings is the dependent variable, and year and industry sales are the independent variables.
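Plugging hypothetical inputs into the proposed formula makes the prediction concrete. Only the coefficients below come from the excerpt; the inputs (Year = 5, industry sales of $10,000,000) are invented for illustration.

```python
# Evaluate the attorney's proposed earnings formula for one hypothetical year.
def predicted_earnings(year, industry_sales):
    return 27_520 + 11.7 * year + 0.0014 * industry_sales

# 27,520 + 11.7*5 + 0.0014*10,000,000 = 41,578.50
print(f"${predicted_earnings(5, 10_000_000):,.2f}")
```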
- David Anderson, Dennis Sweeney, Thomas Williams, Jeffrey Camm (Authors)
- 2020 (Publication Date)
- Cengage Learning EMEA (Publisher)
Multiple coefficient of determination: A measure of the goodness of fit of the estimated Multiple Regression equation. It can be interpreted as the proportion of the variability in the dependent variable that is explained by the estimated regression equation.

Multiple Regression analysis: Regression analysis involving two or more independent variables.

Multiple Regression equation: The mathematical equation relating the expected value or mean value of the dependent variable to the values of the independent variables; that is, E(y) = β0 + β1x1 + β2x2 + ... + βpxp.

Multiple Regression model: The mathematical equation that describes how the dependent variable y is related to the independent variables x1, x2, ..., xp and an error term ε.

KEY FORMULAS

Multiple Regression Model: y = β0 + β1x1 + β2x2 + ... + βpxp + ε (15.1)

Multiple Regression Equation: E(y) = β0 + β1x1 + β2x2 + ... + βpxp (15.2)

Estimated Multiple Regression Equation: ŷ = b0 + b1x1 + b2x2 + ... + bpxp (15.3)

Least Squares Criterion: min Σ(yi − ŷi)² (15.4)

Relationship Among SST, SSR, and SSE: SST = SSR + SSE (15.7)

Multiple Coefficient of Determination: R² = SSR/SST (15.8)

Adjusted Multiple Coefficient of Determination: R²a = 1 − (1 − R²)(n − 1)/(n − p − 1) (15.9)

Mean Square Due to Regression: MSR = SSR/p (15.12)

Mean Square Due to Error: MSE = SSE/(n − p − 1) (15.13)
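These key formulas translate directly into code. The sketch below computes each quantity on simulated data, following the equation numbers above; the data and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 2  # n observations, p independent variables
X = rng.normal(size=(n, p))
y = 4.0 + X @ np.array([2.0, -1.0]) + rng.normal(0, 1, n)

A = np.column_stack([np.ones(n), X])
b, *_ = np.linalg.lstsq(A, y, rcond=None)  # least squares criterion (15.4)
y_hat = A @ b                              # estimated equation (15.3)

sst = ((y - y.mean()) ** 2).sum()              # total sum of squares
sse = ((y - y_hat) ** 2).sum()                 # error sum of squares
ssr = sst - sse                                # SST = SSR + SSE      (15.7)
r2 = ssr / sst                                 # R^2                  (15.8)
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # adjusted R^2         (15.9)
msr = ssr / p                                  # MSR                  (15.12)
mse = sse / (n - p - 1)                        # MSE                  (15.13)
print(f"R^2={r2:.3f}  adj R^2={r2_adj:.3f}  F=MSR/MSE={msr / mse:.1f}")
```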
Handbook of Regression and Modeling
Applications for the Clinical and Pharmaceutical Industries
- Daryl S. Paulson (Author)
- 2006 (Publication Date)
- Chapman and Hall/CRC (Publisher)
4 Multiple Linear Regression

Multiple linear regression is a direct extension of simple linear regression. In simple linear regression models, only one x predictor variable is present, but in multiple linear regression, there are k predictor values, x1, x2, ..., xk. For example, a two-variable predictor model is presented in the following equation:

Yi = b0 + b1xi1 + b2xi2 + εi,   (4.1)

where b1 is the regression slope constant acting on xi1; b2, the regression slope constant acting on xi2; xi1, the ith x value in the first x predictor; xi2, the ith x value in the second x predictor; and εi is the ith error term. A four-variable predictor model is presented in the following equation:

Yi = b0 + b1xi1 + b2xi2 + b3xi3 + b4xi4 + εi.   (4.2)

We can plot two xi predictors (a three-dimensional model), but not beyond. A three-dimensional regression function is not a line, but a plane (Figure 4.1). All the E[Y] or ŷ values fit on that plane. More than two xi predictors move us into four-dimensional space and beyond. As in Chapter 2, we continue to predict yi via ŷi, but now relative to multiple xi variables. The residual value, ei, continues to be the difference between yi and ŷi.

REGRESSION COEFFICIENTS

For the model ŷ = b0 + b1x1 + b2x2, the b0 value continues to be the point on the y axis where x1 and x2 = 0, but other than that, it has no meaning independent of the bi values. The slope constant b1 represents the change in the mean response value ŷ per unit change in x1 when x2 is held constant. Likewise for b2, when x1 is held constant. The bi coefficients are linear, but the predictor xi values need not be.

ASSUMPTIONS

The multiple linear response variables (yi's) are assumed statistically independent of one another.
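The "held constant" interpretation of the slopes can be verified numerically. In this small sketch, the coefficient values are illustrative; increasing x1 by one unit while x2 stays fixed changes ŷ by exactly b1.

```python
# Demonstrating the partial-slope interpretation in
# y_hat = b0 + b1*x1 + b2*x2 (coefficients are illustrative).
b0, b1, b2 = 1.0, 2.5, -0.8

def y_hat(x1, x2):
    return b0 + b1 * x1 + b2 * x2

# Increase x1 by one unit while holding x2 fixed at any value:
print(y_hat(4.0, 7.0) - y_hat(3.0, 7.0))  # prints 2.5, which equals b1
```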
- Roger E Millsap, Alberto Maydeu-Olivares (Authors)
- 2009 (Publication Date)
- SAGE Publications Ltd (Publisher)
13 Applications of Multiple Regression in Psychological Research
Razia Azen and David Budescu

THE REGRESSION MODEL

History and introduction

The regression model was conceptualized in the late nineteenth century by Sir Francis Galton, who was studying how characteristics are inherited from one generation to the next (e.g., Stanton, 2001; Stigler, 1997). Galton's goal was to model and predict the characteristics of offspring based on the characteristics of their parents. The term 'regression' came from the observation that extreme values (or outliers) in one generation produced offspring that were closer to the mean in the next generation; hence, 'regression to the mean' occurred (the original terminology used was regression to 'mediocrity'). Galton also recognized that previous generations (older than the parents) could influence the characteristics of the offspring as well, and this led him to conceptualize the multiple-regression model. His colleague, Karl Pearson, formalized the mathematics of regression models (e.g., Stanton, 2001).

The multiple-regression (MR) model involves one criterion (also referred to as response, predicted, outcome or dependent) variable, Y, and p predictor (also referred to as independent) variables, X1, X2, ..., Xp. The MR model expresses Yi, the observed value of the criterion for the ith case, as a linear composite of the predictors and a residual term:

Yi = β0 + β1X1i + β2X2i + ... + βpXpi + εi   (1)

Here, X1i, X2i, ..., Xpi are the values observed on the p predictors for the ith case, and the various βs (β0, β1, β2, ..., βp) are the (unknown) regression coefficients associated with the various predictors. The first coefficient, β0, is an intercept term (or a coefficient associated with a predictor that takes on the value X0 = 1 for all observations). If all the variables (response and predictors) are standardized to have zero mean and unit variance …
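The excerpt breaks off at standardization. As a hedged illustration of what standardizing buys, the sketch below rescales the response and predictors to zero mean and unit variance before fitting, so the intercept drops out and the slopes become scale-free standardized coefficients; the data are simulated.

```python
import numpy as np

# Standardized (beta) coefficients from a multiple regression fit.
rng = np.random.default_rng(4)
n = 150
X = rng.normal(size=(n, 2)) * [3.0, 0.5]  # predictors on very different scales
y = 10 + X @ np.array([1.2, 4.0]) + rng.normal(0, 2, n)

# Standardize everything to zero mean and unit variance.
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
ys = (y - y.mean()) / y.std(ddof=1)

# No intercept column is needed: all standardized means are zero.
betas, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print("standardized coefficients:", np.round(betas, 3))
```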
Index pages curate the most relevant extracts from our library of academic textbooks. They are created using an in-house natural language model (NLM), with each page adding context and meaning to a key research topic.






