# Structural Equations with Latent Variables

## Kenneth A. Bollen


# CHAPTER ONE

# Introduction

Most standard statistical procedures analyze *individual observations*. In multiple regression or ANOVA (analysis of variance), for instance, we learn that the regression coefficients or the error variance estimates derive from the minimization of the sum of squared differences of the predicted and observed dependent variable for each case. Residual analyses display discrepancies between fitted and observed values for every member of the sample.

The procedures in this book emphasize *covariances* rather than cases.^{1} Instead of minimizing functions of observed and predicted individual values, we minimize the difference between the sample covariances and the covariances predicted by the model. The observed covariances minus the predicted covariances form the residuals. The fundamental hypothesis for these structural equation procedures is that the covariance matrix of the observed variables is a function of a set of parameters. If the model were correct and if we knew the parameters, the population covariance matrix would be exactly reproduced. Much of this book is about the equation that formalizes this fundamental hypothesis:

$$\Sigma = \Sigma(\theta) \tag{1.1}$$

where $\Sigma$ is the population covariance matrix of the observed variables, $\theta$ is a vector of the model parameters, and $\Sigma(\theta)$ is the covariance matrix written as a function of $\theta$.

Consider a simple example with one explanatory variable:

$$y = \gamma x + \zeta$$

where *γ* (gamma) is the regression coefficient, ζ (zeta) is the disturbance variable uncorrelated with *x*, and the expected value of ζ, *E*(ζ), is zero. The *y*, *x*, and ζ are random variables. This model in terms of (1.1) is^{2}

$$\begin{bmatrix} \mathrm{VAR}(y) & \mathrm{COV}(x, y) \\ \mathrm{COV}(x, y) & \mathrm{VAR}(x) \end{bmatrix} = \begin{bmatrix} \gamma^2 \mathrm{VAR}(x) + \mathrm{VAR}(\zeta) & \gamma\,\mathrm{VAR}(x) \\ \gamma\,\mathrm{VAR}(x) & \mathrm{VAR}(x) \end{bmatrix} \tag{1.2}$$

with *γ*, VAR(x), and VAR(ζ) as parameters. The equation implies that each element on the left-hand side equals its corresponding element on the right-hand side. For example, COV(x, y) = *γ*VAR(x) and VAR(y) = *γ*²VAR(x) + VAR(ζ). I could modify this example to create a multiple regression by adding explanatory variables, or I could add equations and other variables to make it a simultaneous equations system such as that developed in classical econometrics. Both cases can be represented as special cases of equation (1.1), as I show in Chapter 4.
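These identities are easy to check numerically. The following is a minimal NumPy sketch, not from the text, with arbitrary parameter values: it builds the implied covariance matrix for (y, x), simulates a large sample from the model, and compares the two.

```python
import numpy as np

# Arbitrary parameter values for the model y = gamma*x + zeta
gamma, var_x, var_zeta = 0.5, 2.0, 1.0

# Implied covariance matrix Sigma(theta) for (y, x)
sigma_theta = np.array([
    [gamma**2 * var_x + var_zeta, gamma * var_x],
    [gamma * var_x,               var_x],
])

# Simulate a large sample from the model
rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, np.sqrt(var_x), n)
zeta = rng.normal(0.0, np.sqrt(var_zeta), n)
y = gamma * x + zeta

# Sample covariance matrix of (y, x); close to sigma_theta for large n
s = np.cov(np.vstack([y, x]))
print(sigma_theta)
print(s)
```

The residuals for the covariance-based approach are the elementwise differences `s - sigma_theta`, which shrink toward zero as the sample grows.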

A second example is a simple factor analysis model with two observed variables, *x*₁ and *x*₂, that are indicators of a factor (or latent random variable) called ξ (xi). The dependence of the variables on the factor is

$$x_1 = \xi + \delta_1 \qquad x_2 = \xi + \delta_2$$

where δ₁ and δ₂ (delta) are random disturbance terms, uncorrelated with ξ and with each other, and *E*(δ₁) = *E*(δ₂) = 0. Equation (1.1) specializes to

$$\begin{bmatrix} \mathrm{VAR}(x_1) & \mathrm{COV}(x_1, x_2) \\ \mathrm{COV}(x_1, x_2) & \mathrm{VAR}(x_2) \end{bmatrix} = \begin{bmatrix} \phi + \mathrm{VAR}(\delta_1) & \phi \\ \phi & \phi + \mathrm{VAR}(\delta_2) \end{bmatrix} \tag{1.3}$$

where *ϕ* (phi) is the variance of the latent factor ξ. Here θ consists of three elements: *ϕ*, VAR(δ₁), and VAR(δ₂). The covariance matrix of the observed variables is a function of these three parameters. I could add more indicators and more latent factors, allow for coefficients ("factor loadings") relating the observed variables to the factors, and allow correlated disturbances, creating an extremely general factor analysis model. As Chapter 7 demonstrates, this is a special case of the covariance structure equation (1.1).
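The same kind of numerical check works for the factor model. This is a minimal sketch with arbitrary parameter values (not from the text): the common factor ξ induces covariance *ϕ* between the two indicators, while each indicator's variance is *ϕ* plus its own disturbance variance.

```python
import numpy as np

# Arbitrary parameter values: phi = VAR(xi), plus the disturbance variances
phi, var_d1, var_d2 = 1.5, 0.4, 0.6

# Implied covariance matrix Sigma(theta) for (x1, x2)
sigma_theta = np.array([
    [phi + var_d1, phi],
    [phi,          phi + var_d2],
])

# Simulate the measurement model x1 = xi + d1, x2 = xi + d2
rng = np.random.default_rng(1)
n = 200_000
xi = rng.normal(0.0, np.sqrt(phi), n)
x1 = xi + rng.normal(0.0, np.sqrt(var_d1), n)
x2 = xi + rng.normal(0.0, np.sqrt(var_d2), n)

# Sample covariance matrix; close to sigma_theta for large n
s = np.cov(np.vstack([x1, x2]))
print(sigma_theta)
print(s)
```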

A final example combines features of the first two. The first equation is a regression,

$$y = \gamma\xi + \zeta$$

where, unlike the previous regression, the independent random variable is unobserved. The last two equations are identical to the factor analysis example: *x*₁ = ξ + δ₁ and *x*₂ = ξ + δ₂. I assume that ζ, δ₁, and δ₂ are uncorrelated with ξ and with each other, and that each has an expected value of zero. The resulting structural equation system is a combination of factor analysis and regression-type models, but it is still a specialization of (1.1):

$$\begin{bmatrix} \mathrm{VAR}(y) & \mathrm{COV}(x_1, y) & \mathrm{COV}(x_2, y) \\ \mathrm{COV}(x_1, y) & \mathrm{VAR}(x_1) & \mathrm{COV}(x_1, x_2) \\ \mathrm{COV}(x_2, y) & \mathrm{COV}(x_1, x_2) & \mathrm{VAR}(x_2) \end{bmatrix} = \begin{bmatrix} \gamma^2\phi + \mathrm{VAR}(\zeta) & \gamma\phi & \gamma\phi \\ \gamma\phi & \phi + \mathrm{VAR}(\delta_1) & \phi \\ \gamma\phi & \phi & \phi + \mathrm{VAR}(\delta_2) \end{bmatrix} \tag{1.4}$$
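A numerical check of the covariance structure (1.4) follows the same pattern as before. This is a minimal sketch with arbitrary parameter values (not from the text), combining the regression and measurement equations in one simulation.

```python
import numpy as np

# Arbitrary parameter values for the combined model
gamma, phi = 0.8, 1.2
var_zeta, var_d1, var_d2 = 0.5, 0.3, 0.4

# Implied covariance matrix Sigma(theta) for (y, x1, x2), per (1.4)
sigma_theta = np.array([
    [gamma**2 * phi + var_zeta, gamma * phi,  gamma * phi],
    [gamma * phi,               phi + var_d1, phi],
    [gamma * phi,               phi,          phi + var_d2],
])

# Simulate: y = gamma*xi + zeta; x1 = xi + d1; x2 = xi + d2
rng = np.random.default_rng(2)
n = 200_000
xi = rng.normal(0.0, np.sqrt(phi), n)
y = gamma * xi + rng.normal(0.0, np.sqrt(var_zeta), n)
x1 = xi + rng.normal(0.0, np.sqrt(var_d1), n)
x2 = xi + rng.normal(0.0, np.sqrt(var_d2), n)

# Sample covariance matrix; close to sigma_theta for large n
s = np.cov(np.vstack([y, x1, x2]))
print(sigma_theta)
print(s)
```

Note that the latent variable ξ never enters the sample covariance matrix directly; only its footprint in the covariances of the observed *y*, *x*₁, and *x*₂ is available for estimation.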

The structural equation models treated in this book are *linear equations*. By linear, I mean that the relations between all variables, latent and observed, can be represented in linear structural equations, or they can be transformed to linear forms.^{3} Structural equations that are nonlinear in the parameters are excluded. Nonlinear functions of parameters are, however, common in the *covariance* structure equation, Σ = Σ(θ). For instance, the last example had three linear structural equations: *y* = *γ*ξ + ζ, *x*₁ = ξ + δ₁, and *x*₂ = ξ + δ₂. Each is linear in the variables and parameters. Yet the covariance structure (1.4) for this model shows that COV(x₁, y) = *γϕ*, which means that COV(x₁, y) is a nonlinear function of *γ* and *ϕ*. Thus it is the structural equations linking the observed, latent, and disturbance variables that are linear, and not necessarily the covariance structure equations.

# HISTORICAL BACKGROUND

Interest in these methods has spread well beyond any single discipline, as reflected in special issues devoted to structural equation models in the *Journal of Marketing Research* and the May–June 1983 issue of the *Journal of Econometrics*.

In the path diagram for this model, δ₁ and δ₂ are uncorrelated with each other and with ξ. Straight single-headed arrows represent one-way causal influences from the variable at the arrow base to the variable to which the arrow points. The implicit coefficients of one for the effects of ξ on *x*₁ and *x*₂ are made explicit in the diagram.^{4}