Highlights of the history of the American Documentation Institute (ADI) (1937‐1967) are sketched, giving background on the concepts and aspirations of Watson Davis (1896‐1967), founder of ADI and one of the first Americans to become interested in documentation as a separate field of endeavor. Davis organized ADI as a service organization, concentrating primarily on offering microfilming services. Since 1952 ADI has grown in importance as a professional organization, gradually developing to a state of financial self‐sufficiency, intellectual influence, and benefaction to its members. These developments are traced for the most part in chronological order, sometimes with the use of footnotes to bring up to date matters introduced in the text at the time of their earliest importance.
The original aim of this study was to obtain objective data bearing on the much‐argued question of whether author indexing is “good.” Author indexing of 285 documents reporting biomedical research was scored by comparing the author‐supplied terms (author set) for each paper with a criterion set of terms that was established by asking a group of 12 potential users to describe the same document. Terms in the document title (title set) were scored similarly. The average author set contained almost half of all the terms employed by more than one member of the user group and scored 73% of the maximal possible score, as compared with 44% for the average title set. When judged by the method and criterion employed here, author indexing is substantially better than indexing derived from document titles. The findings suggest that indicia supplied by an author should serve scientists in biomedical disciplines other than his own about as well as they serve his disciplinary colleagues. The general method developed for measuring indexing quality may represent a practical yardstick of wide applicability.
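The scoring approach described above can be sketched as a simple term-overlap computation. The following is a minimal illustration, not the paper's actual procedure: it assumes the criterion set consists of terms used by more than one member of the user group, with each term weighted by the number of users who supplied it (the abstract does not state the exact weighting scheme, so the weights here are hypothetical).

```python
from collections import Counter

def criterion_set(user_term_lists):
    """Build a weighted criterion set: terms used by more than one user,
    weighted by how many users supplied them (hypothetical weighting)."""
    counts = Counter(t for terms in user_term_lists for t in set(terms))
    return {term: n for term, n in counts.items() if n > 1}

def index_score(candidate_terms, criterion):
    """Score a candidate term set (author set or title set) as the fraction
    of the maximal possible weighted score it captures."""
    maximal = sum(criterion.values())
    achieved = sum(w for term, w in criterion.items()
                   if term in set(candidate_terms))
    return achieved / maximal if maximal else 0.0

# Example: 4 hypothetical users describe a document; score the author's terms.
users = [["enzyme", "kinetics", "liver"],
         ["enzyme", "liver", "rat"],
         ["kinetics", "enzyme"],
         ["liver", "metabolism"]]
crit = criterion_set(users)        # {"enzyme": 3, "kinetics": 2, "liver": 3}
author = ["enzyme", "liver", "dosage"]
score = index_score(author, crit)  # (3 + 3) / (3 + 2 + 3) = 0.75
```

Under this kind of yardstick, the paper's 73% versus 44% comparison amounts to computing this fraction for the author set and the title set of each document and averaging over the collection.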
A study of the indexing terms of 10,000 documents was conducted for the main purpose of analyzing the frequency of use of the indexing terms, both singly and in combination with one another. The serial number of each document and the indexing terms for it were typed onto magnetic tape using a Remington Rand Unityper magnetic‐tape typewriter, and this information was analyzed by a Univac I computer; the sequence of computer operations is given in a flow chart. The statistical results, presented graphically, include (1) the number of descriptors used to describe the content of a reference, (2) the distribution of use of the total vocabulary as contained in the dictionary (authority list) for the system, and (3) the frequency of occurrence of singles, pairs, and triples in describing individual documents. The data obtained through this analysis are informative, but not sufficient to fully evaluate an information system. Full evaluation would require analyzing the actual output of the system in terms of the desired output.
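The frequency analysis described above — counting singles, pairs, and triples of descriptors across index records — is straightforward to express in modern terms. A minimal sketch (the original was run on a Univac I from magnetic tape; the data structures here are assumptions for illustration):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(documents):
    """Count how often each single descriptor, pair of descriptors, and
    triple of descriptors occurs across a file of document index records."""
    singles, pairs, triples = Counter(), Counter(), Counter()
    for terms in documents:
        terms = sorted(set(terms))  # dedupe and fix order so pairs are canonical
        singles.update(terms)
        pairs.update(combinations(terms, 2))
        triples.update(combinations(terms, 3))
    return singles, pairs, triples

# Hypothetical index records: each document is a list of descriptors.
docs = [["radar", "antenna", "microwave"],
        ["radar", "antenna"],
        ["computer", "radar"]]
singles, pairs, triples = cooccurrence_counts(docs)
# singles["radar"] == 3; pairs[("antenna", "radar")] == 2
```

Distributions like these show which parts of the authority list carry the indexing load and which descriptor combinations recur, but, as the abstract notes, they describe input vocabulary use rather than retrieval output, so they cannot by themselves evaluate the system.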
Persons performing research in the field of information retrieval need to be concerned with likenesses as well as differences in various automated systems. This paper uses the same computer routines to compare two dictionaries of different sizes, which share some features and differ in others. It compares the use of the dictionaries in building their input files, and discusses the frequency of occurrence of descriptors and their combining power, with an analysis of the data generated.
A generalized method is given for performing information retrieval by computer. The method can be applied to systems varying in type, size, manner of use, and computer equipment. Systems with files of information‐retrieval data or document references may use the method. Options allow many or few searches to be performed during each pass of the file through the computer. Alternative searches or subsearches may be specified, and the upper and lower bounds on the number of retrieval references may be set. The options can be used to define the characteristics of a particular system so that necessary staff and equipment capacities can be determined. Output forms include reference numbers, abstracts, microfilm, and so forth, depending on the output equipment chosen. In addition to retrieval information, the output may include system‐use records, used to analyze and evaluate system efficiency automatically and to compare objectively one system with another. A block process diagram, a detailed flow chart for application to a computer, and a description of the application are provided.