Although computer-based test interpretation (CBTI) systems have been operational for nearly 25 years, their availability and adoption in routine clinical practice have grown exponentially in recent years. This article addresses methodological considerations in CBTI validation studies, emphasizing design issues relevant to customer satisfaction studies. Specifically, issues of response bias are addressed as they relate to the selection of raters and test respondents, the use of random reports as a "control" for spurious ratings of report validity, and both the format and content of ratings. Deficiencies of various studies from the research literature are reviewed, and the advantages and limitations of design alternatives are discussed.

Both the availability and adoption of computer-based test interpretation (CBTI) systems have grown exponentially over the last decade. Krug (1984) listed over 200 computer software applications of psychological testing, and most experts predict that the growth of computer-based assessment products will continue (Butcher, 1987b; Lanyon, 1987; Mitchell & Kramer, 1985). The potential for CBTIs to facilitate assessment is without question. Research findings generally indicate that "well-designed statistical treatment of test results and ancillary information will yield more valid assessments than will an individual professional using the same information" (American Psychological Association, 1986, p. 13). Computerized interpretive narratives, when developed on a broad actuarial foundation of empirical findings relating test indices to relevant external criteria, offer distinct advantages, including (a) economy of processing and more effective use of professional resources; (b) accuracy and consistency of scoring and implementation of interpretive decision rules; (c) virtually unlimited capacity for storage, indexing, and retrieval of relevant information from the clinical and research literature regarding test-behavior relationships; (d) ability to subject test indicators to complex, configural analyses; and (e) potential for automated collection and analysis of extensive normative data bases (Jackson, 1985; Krug, 1987).

However, the proliferation of computer-based assessment has also generated considerable controversy (Bersoff, 1989