This study investigated the similarity of the information provided by direct and indirect methods of writing assessment. The skills required by each of these techniques provide a framework for a cognitive model of writing skills from which the two procedures can be compared. It is suggested that practitioners interested in reliably measuring all aspects of the proposed writing process continuum, as characterized by this cognitive model, use both indirect and direct methods.

An issue in ability and achievement testing that has gained increased attention in the recent psychometric literature is the relative value of the information gained from objective (multiple-choice) and free-response (including essay) testing methodologies. Critics of objective testing claim that the information that can be gained from multiple-choice tests is limited relative to that gained from free-response options. Others claim that the scoring reliability and validity of essay and other free-response formats are so poor as to outweigh any such advantage.

Research on the comparability of the construct validity of the two general classes of measures has been somewhat sparse and varied in its conclusions. Early work comparing multiple-choice tests with constructed-response tests (Davis & Fifer, 1959; Heim & Watts, 1967; Vernon, 1962) generally indicated that tests employing different formats cannot be expected to have the same means, standard deviations, and correlations with criterion variables. Some of these differences can be assumed to be due to changes in the scale of measurement and in the amount of error variance associated with each format; thus the results of this earlier work do not necessarily imply a lack of construct equivalence.

Traub and Fisher (1977) recognized these problems and employed a methodology that equated scale parameters and error variances across three response formats for both verbal and quantitative measures. Two of these formats were multiple-choice and constructed-response. Using confirmatory factor analysis (CFA), Traub and Fisher found little evidence of a format effect for the mathematical reasoning items, and only weak evidence that the free-response and multiple-choice items were measuring different constructs for the verbal comprehension items.

Ward, Frederiksen, and Carlson (1980), also using CFA, compared machine-scored and constructed-response forms of a test of the ability to formulate scientific hypotheses. Although their data were somewhat restricted and their analysis was more concerned with correlations of the resulting scores with personality and other cognitive variables, the Ward et al. results indicated that the two formats measure, at least in part, different constructs. In an additional study, Ward (1982) concluded that for verbal aptitude items, various item formats produce much the same information and are essentially equivalent in terms of both the technical adequacy of the resulting measures and the construct interpretations of the resulting scores.

With the except...