Evaluating texts is an important activity associated with teaching statistics. Surprisingly, the statistical education literature offers little guidance on how these evaluations should be conducted. This lack of guidance may be at least partly responsible for the fact that published evaluations of statistics texts almost invariably employ evaluation criteria that lack any theory-based rationale. This failing is typically compounded by a lack of empirical evidence supporting the usefulness of the criteria. This article describes the construction and piloting of instruments for evaluating statistics texts that are grounded in the statistical education and text evaluation literatures. The study is an initial step in a line of research that we hope will result in the establishment and maintenance of a database of evaluations of statistics texts. Evaluative information of this kind should assist instructors wrestling with text selection decisions and individuals charged with performing evaluations, such as journal reviewers, and should ultimately benefit the direct consumers of these texts—the students.