Despite the proliferation of longitudinal trauma research, careful attention to the timing of assessments is often lacking. Patterns in the timing of assessments, alternative time structures, and the treatment of time as an outcome are discussed and illustrated using trauma data.

Over the past two decades, there has been increasing attention to the importance of prospective designs and longitudinal analytic strategies in the study of psychological trauma. As noted elsewhere (King, Vogt, & King, 2004), early calls for longitudinal methods in trauma research were offered by Green, Lindy, and Grace (1985), Denny, Rabinowitz, and Penk (1987), and Keane, Wolfe, and Taylor (1987). In a later review of the validity of causal inference in trauma research, King and King (1991) endorsed a lifespan perspective and proposed the use of then-available longitudinal methods to better understand the course of mental health sequelae to trauma exposure. To date, many major trauma research teams have incorporated repeated assessments of trauma victims postexposure and/or following treatment. For the most part, researchers have employed ordinary least squares regression, in which one's standing on a variable on one occasion (e.g., a trait assessed prior to exposure; severity of exposure or coping style assessed shortly after the trauma) predicts one's standing on outcomes at one or more later occasions. Prior status on the outcome may or may not be "controlled for" in predicting later status (see Gollob & Reichardt, 1987, for cautions).

Additionally, a few researchers have applied autoregressive models to evaluate the effect of one variable on change in another variable, with logic rooted in the cross-lagged panel design, in which the same variables are assessed on two or more occasions. Typically, the research question concerns the directionality of influence between or among the several variables in the cross-lagged model: To what extent does one variable cause change in the other? As examples, see trauma research studies by King et al. (2000), Erickson, Wolfe, King, King, and Sharkansky (2001), Schell, Marshall, and Jaycox (2004), and King, Taft, King, Hammond, and Stone (in press). With the cross-lagged panel design, change is the deviation of one's observed outcome score on a later occasion from the score predicted by one's earlier standing on that variable.

A more direct yet controversial approach to assessing change is the simple difference score: one's standing on a variable at an earlier occasion subtracted from one's standing on that variable at a later occasion. There have been decades of concern over the reliability of difference scores (e.g., Cronbach & Furby, 1970; Humphreys, 1996), mostly owing to the assumption of equal dispersion of the score distributions from which the difference is calculated. A stream of important research (e.g., Nesselroade & Cable, 1974; Sharma & Gupta, 1986; Williams & Zimmerman, 1996), however, has demonstrated that the concern may not be well founded. Therefore, a reliable direct assessment of change as a simple ...
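As a minimal sketch of the two notions of change just described (the notation is illustrative and not drawn from the studies cited above): under the autoregressive, cross-lagged logic, change is a residual, the observed later score minus the score predicted from the earlier one, whereas the difference-score approach is simple subtraction of the earlier score from the later one.

```latex
% Residualized (autoregressive) change for person i: the deviation of the
% observed Time-2 score from the value predicted by the Time-1 score
\Delta_i^{\mathrm{resid}} = Y_{i2} - \bigl(\hat{\beta}_0 + \hat{\beta}_1 Y_{i1}\bigr)

% Simple difference score: Time-2 standing minus Time-1 standing
D_i = Y_{i2} - Y_{i1}
```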