The models studied in Section 1.2, Section 1.3, and Chapter 2 can be characterized by the fact that the true values of the observed variables satisfy a single linear equation. The models fell into two classes: those with an error in the equation and those with only measurement errors. In the models containing an error in the equation (e.g., the models of Sections 1.2 and 2.2), one variable could be identified as the "dependent" or y variable. In models with only measurement error (e.g., the models of Sections 1.3 and 2.3), the variables entered the model in a symmetric manner from a distributional point of view, but we generally chose to identify one variable as the y variable.

In this chapter we extend our treatment of measurement error models with no error in the equation to the situation in which the true variables satisfy more than one linear relation. The extension of the model with an error in the equation to the multivariate case is relatively straightforward and is not discussed.
THE CLASSICAL MULTIVARIATE MODEL

The model introduced in Section 1.3 assumes that independent information on the covariance matrix of the measurement errors is available and that the only source of error is measurement error. This section is devoted to multivariate models of that type.
Maximum Likelihood Estimation

We derive the maximum likelihood estimators for two models: the model with fixed $x_t$ and the model with random $x_t$. One representation for the fixed model is
$$Y_t = x_t\beta + e_t, \qquad X_t = x_t + u_t, \tag{4.1.1}$$
where $\{x_t\}$ is a fixed sequence, $\varepsilon_t = (e_t, u_t)$, $Z_t = (Y_t, X_t)$ is observed, $y_t$ is an $r$-dimensional row vector, $x_t$ is a $k$-dimensional row vector, $z_t = (y_t, x_t)$ is a $p$-dimensional row vector, $\beta$ is a $k \times r$ matrix of unknown coefficients, and
$$\varepsilon_t' \sim \operatorname{NI}(\mathbf{0},\ \Sigma_{\varepsilon\varepsilon}).$$

The maximum likelihood estimators for the case in which
$$\Sigma_{\varepsilon\varepsilon} = \Upsilon_{\varepsilon\varepsilon}\sigma^2 \tag{4.1.2}$$
and $\Upsilon_{\varepsilon\varepsilon}$ is known are derived in Theorem 4.1.1 and are direct extensions of the estimators of Theorem 2.3.1. The estimator of $\beta$ is expressed as a function of the characteristic vectors of $M_{ZZ}$ in the metric $\Upsilon_{\varepsilon\varepsilon}$, where the vectors are defined in (4.A.21) and (4.A.22) of Appendix 4.A.

Theorem 4.1.1. Let model (4.1.1) hold. Assume that $\Upsilon_{\varepsilon\varepsilon}$ and $M_{ZZ}$ are positive definite. Then the maximum likelihood estimators are
$$\hat\beta = -\hat B_{kr}\hat B_{rr}^{-1},$$
$$\hat z_t = Z_t(I - \hat B\hat B'\Upsilon_{\varepsilon\varepsilon}) = Z_t - \hat v_t\,\hat\Upsilon_{vv}^{-1}\hat\Upsilon_{v\varepsilon},$$
where $M_{ZZ} = n^{-1}\sum_{t=1}^{n} Z_t'Z_t$, $p - \ell$ is the rank of $\Upsilon_{\varepsilon\varepsilon}$, $\hat v_t = Z_t(I_r, -\hat\beta')'$,
$$\hat\sigma^2 = (p - \ell)^{-1}\sum_{i=k+1}^{p}\hat\lambda_i, \tag{4.1.3}$$
$$(\hat\Upsilon_{ve},\ \hat\Upsilon_{vu}) = (I_r, -\hat\beta')\Upsilon_{\varepsilon\varepsilon}, \qquad \hat\Upsilon_{vv} = (I_r, -\hat\beta')\Upsilon_{\varepsilon\varepsilon}(I_r, -\hat\beta')',$$
$\hat B_{\cdot i}$ are the characteristic vectors of $M_{ZZ}$ in the metric $\Upsilon_{\varepsilon\varepsilon}$, $\hat B_{\cdot i}$, $i = 1, 2, \ldots, r$, are the columns of $\hat B = (\hat B_{rr}',\ \hat B_{kr}')'$ associated with the $r$ smallest roots, and $\hat\lambda_p \le \hat\lambda_{p-1} \le \cdots \le \hat\lambda_{p-r+1}$ are the $r$ smallest roots of
$$|M_{ZZ} - \lambda\Upsilon_{\varepsilon\varepsilon}| = 0.$$

Proof. The proof parallels the proof of Theorem 2.3.1. To simplify the proof we assume $\Upsilon_{\varepsilon\varepsilon}$ to be nonsingular, but the conclusion is true for singular $\Upsilon_{\varepsilon\varepsilon}$. Twice the logarithm of the likelihood for a sample of $n$ observations is
$$2\log L = -n\log|2\pi\Sigma_{\varepsilon\varepsilon}| - \sum_{t=1}^{n}(Z_t - z_t)\Sigma_{\varepsilon\varepsilon}^{-1}(Z_t - z_t)'. \tag{4.1.4}$$
The $z_t$ that maximizes (4.1.4) for a given $\beta$ is
$$\hat z_t' = (\beta, I_k)'[(\beta, I_k)\Upsilon_{\varepsilon\varepsilon}^{-1}(\beta, I_k)']^{-1}(\beta, I_k)\Upsilon_{\varepsilon\varepsilon}^{-1}Z_t' = Z_t' - \Upsilon_{\varepsilon\varepsilon}C(C'\Upsilon_{\varepsilon\varepsilon}C)^{-1}C'Z_t',$$
where $C = (I_r, -\beta')'$, ...
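To make the estimator $\hat\beta = -\hat B_{kr}\hat B_{rr}^{-1}$ of Theorem 4.1.1 concrete, the following is a minimal numerical sketch, not part of the text: the dimensions, the choice $\Upsilon_{\varepsilon\varepsilon} = I_p$, the simulated fixed $x_t$, and the use of `scipy.linalg.eigh` to compute the characteristic roots and vectors of $M_{ZZ}$ in the metric $\Upsilon_{\varepsilon\varepsilon}$ are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, k, r = 500, 3, 2                     # sample size, dim(x_t), dim(y_t)
p = k + r

beta = rng.normal(size=(k, r))          # true k x r coefficient matrix
x = rng.normal(size=(n, k))             # "fixed" true values x_t
z = np.hstack([x @ beta, x])            # z_t = (y_t, x_t) with y_t = x_t beta

Yee = np.eye(p)                         # known Upsilon_ee (identity, for illustration)
sigma2 = 0.05                           # true sigma^2 in Sigma_ee = Upsilon_ee sigma^2
Z = z + rng.multivariate_normal(np.zeros(p), sigma2 * Yee, size=n)

Mzz = Z.T @ Z / n
# Characteristic vectors of Mzz in the metric Yee: solves Mzz b = lambda Yee b.
# eigh returns the roots in ascending order, so the first r are the smallest.
lam, B = eigh(Mzz, Yee)

Bhat = B[:, :r]                         # vectors for the r smallest roots
B_rr, B_kr = Bhat[:r, :], Bhat[r:, :]   # partition into the (y, x) row blocks
beta_hat = -B_kr @ np.linalg.inv(B_rr)  # beta-hat = -B_kr B_rr^{-1}

# ML-type estimate of sigma^2: average of the r smallest roots over p,
# taking the divisor in (4.1.3) to be p for nonsingular Upsilon_ee.
sigma2_hat = lam[:r].sum() / p
```

Because $\hat\beta$ is invariant to the normalization of the characteristic vectors, the particular scaling returned by `eigh` does not matter. Note that, as is well known for functional measurement error models, this maximum likelihood estimator of $\sigma^2$ is not consistent (here it converges to $(r/p)\sigma^2$), a point worth keeping in mind when reading (4.1.3).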
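The projection step in the proof, which gives $\hat z_t$ for fixed $\beta$ in two equivalent forms, can be checked numerically. In the sketch below the matrices are arbitrary illustrative values, not from the text; $C = (I_r, -\beta')'$ is the null-space matrix satisfying $(\beta, I_k)C = 0$, so the generalized least squares form and the residual form of $\hat z_t$ must agree.

```python
import numpy as np

rng = np.random.default_rng(1)
r, k = 2, 3
p = r + k

beta = rng.normal(size=(k, r))            # an arbitrary "given" coefficient matrix
A = rng.normal(size=(p, p))
Yee = A @ A.T + p * np.eye(p)             # an arbitrary positive definite Upsilon_ee
Z = rng.normal(size=(10, p))              # rows play the role of observations Z_t

Bk = np.hstack([beta, np.eye(k)])         # (beta, I_k): z_t = x_t (beta, I_k)
C = np.vstack([np.eye(r), -beta])         # C = (I_r, -beta')', so (beta, I_k) C = 0

Yinv = np.linalg.inv(Yee)
# Generalized least squares form: project Z_t' onto the row space of (beta, I_k).
P_gls = Bk.T @ np.linalg.inv(Bk @ Yinv @ Bk.T) @ Bk @ Yinv
Zhat_gls = (P_gls @ Z.T).T
# Residual form: subtract the oblique projection onto the error space.
Zhat_res = Z - (Yee @ C @ np.linalg.inv(C.T @ Yee @ C) @ C.T @ Z.T).T

print(np.allclose(Zhat_gls, Zhat_res))    # the two forms agree
print(np.allclose(Zhat_gls @ C, 0.0))     # fitted z_t satisfy y_t = x_t beta exactly
```

Both projections are idempotent with range equal to the null space of $C'$, which is why the two expressions coincide and why the fitted values satisfy the linear relations without error.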