In this paper, major factors that interact with readability measures in validity studies are identified and described. The analysis is based on thirty-six experimental studies of the effect of readability variables on reader comprehension and/or retention. Emphasis is placed on the interacting nature of the factors involved.
Individuals are frequently called upon to judge the readability of written text. The accuracy of such judgments, studies show, ranges from high to low. This paper provides another look at the problem, based on the judgments of 56 professional writers of five passages of text taken from a reading test. The judges were asked to rank the five passages from most readable to least readable. The results showed wide variability in the judgments. Only a few judges individually put the passages in the tested order of readability, but the consensus of the entire group put them in exactly that order. Further examination of the results suggested that a relatively small number of gross errors in judgment had been made. Accuracy of judgment, it appeared, might increase greatly with selection and/or training of judges, a procedure followed in certain studies where highly accurate judgments had been found. A readability formula was suggested as an accurate and convenient way of obtaining readability scores under most circumstances. Use of a formula might also, it was suggested, help a judge increase his accuracy, but human interpretation of the scores was still felt to be needed.
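As a minimal illustration of what such a formula computes, the sketch below implements the Flesch Reading Ease formula, one well-known example; the abstract does not name the specific formula the study suggested. The syllable counter is a crude vowel-group heuristic, and the sample passage is invented for demonstration.

import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups; published formulas rely on
    # dictionaries or better syllabification, so treat this as approximate.
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1  # drop a typically silent final 'e'
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch (1948): higher scores indicate more readable text.
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

if __name__ == "__main__":
    passage = "The cat sat on the mat. It purred happily."
    print(f"Flesch Reading Ease: {flesch_reading_ease(passage):.1f}")

Such a score is only a prediction; as the study above notes, human interpretation of the numbers is still needed.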
A retrospective look shows that earlier advice remains relevant to both predicting and producing readable writing. For prediction, refined readability formulas with stronger criterion passages and updated familiar-word lists have appeared, although the computerization of readability tests sometimes encourages misapplying or misinterpreting them when screening text. For production, attention to sentence construction, word characteristics, and information density remains relevant to both drafting and revising computer documentation for readability, especially since reading speed and reader preference often interact with comprehension in practical settings.
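To illustrate how a familiar-word list feeds into prediction, here is a sketch of the new Dale-Chall computation, one formula of the kind the passage alludes to. The FAMILIAR set is a hypothetical stand-in for the real list of roughly 3,000 familiar words; any word not on the list counts as "difficult".

import re

# Tiny stand-in for the Dale-Chall familiar-word list (the real list
# has about 3,000 entries, and updated lists change the results).
FAMILIAR = {"the", "cat", "sat", "on", "mat", "it", "is", "a", "and"}

def dale_chall_score(text: str, familiar=FAMILIAR) -> float:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    n = max(len(words), 1)
    pct_difficult = 100.0 * sum(w not in familiar for w in words) / n
    avg_sentence_len = n / sentences
    # New Dale-Chall raw score; an adjustment constant is added
    # when more than 5% of the words are unfamiliar.
    score = 0.1579 * pct_difficult + 0.0496 * avg_sentence_len
    if pct_difficult > 5.0:
        score += 3.6365
    return score

if __name__ == "__main__":
    print(f"Dale-Chall score: {dale_chall_score('The cat sat on the mat.'):.2f}")

Because the familiar-word list drives the difficult-word percentage, running such a formula with an outdated list is one way computerized screening can misinterpret a text's readability.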