A general approach for item selection in adaptive multiple-category classification tests is provided. The approach uses mutual information (MI), a special case of the Kullback-Leibler distance, or relative entropy. MI works efficiently with the sequential probability ratio test and alleviates the difficulties encountered with using other local- and global-information measures in the multiple-category classification setting. Results from simulation studies using three item selection methods, Fisher information (FI), posterior-weighted FI (FIP), and MI, are provided for an adaptive four-category classification test. Both across and within the four classification categories, MI item selection generally classifies the highest proportion of examinees correctly and yields the shortest test lengths. The next best performance is observed for FIP item selection, followed by FI.
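The abstract does not spell out the computation, but the MI criterion it names has a standard form: for each candidate item, the mutual information between the (binary) item response and the classification category, taken under the current posterior over categories. The following is a minimal illustrative sketch, assuming a 2PL IRT response model and one representative ability point per category; all function and variable names (`irt_2pl`, `select_item`, etc.) are hypothetical, not from the paper.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """2PL item response function: P(correct | theta).
    Illustrative model choice; the paper's exact model is not stated here."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mutual_information(posterior, thetas, a, b):
    """MI between one item's binary response X and category C:
    MI = sum_k sum_x P(C=k) P(X=x|C=k) log[ P(X=x|C=k) / P(X=x) ].
    posterior[k] = current weight of category k; thetas[k] = its
    representative ability point."""
    p_correct = irt_2pl(thetas, a, b)            # P(X=1 | C=k), shape (K,)
    p_x = np.array([1 - p_correct, p_correct])   # P(X=x | C=k), shape (2, K)
    marginal = p_x @ posterior                   # P(X=x), shape (2,)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = posterior * p_x * np.log(p_x / marginal[:, None])
    return np.nansum(terms)                      # 0*log(0) terms contribute 0

def select_item(posterior, thetas, item_bank, administered):
    """Pick the unadministered item (a, b) pairs with maximal MI."""
    best, best_mi = None, -np.inf
    for j, (a, b) in enumerate(item_bank):
        if j in administered:
            continue
        mi = mutual_information(posterior, thetas, a, b)
        if mi > best_mi:
            best, best_mi = j, mi
    return best
```

In an adaptive loop, the posterior over categories would be updated after each response and fed back into `select_item` until the sequential probability ratio test terminates the test; that loop is omitted here.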
Citations are commonly held to represent scientific impact. To date, however, there is no empirical evidence supporting this postulate, which is central to research assessment exercises and Science of Science studies. Here, we report on the first empirical verification of the degree to which citation numbers represent scientific impact as it is actually perceived by experts in their respective field. We run a large-scale survey of about 2,000 corresponding authors who performed a pairwise impact assessment task across more than 20,000 scientific articles. Results of the survey show that citation data and perceived impact do not align well, unless one properly accounts for strong psychological biases that affect the opinions of experts with respect to their own papers vs. those of others. First, researchers tend to largely prefer their own publications to the most cited papers in their field of research. Second, there is only a mild positive correlation between the number of citations of top-cited papers in given research areas and expert preference in pairwise comparisons. This also applies to pairs of papers with several orders of magnitude differences in their total number of accumulated citations. However, when researchers were asked to choose among pairs of their own papers, thus eliminating the bias favouring one's own papers over those of others, they did systematically prefer the most cited article. We conclude that, when scientists have full information and are making unbiased choices, expert opinion on impact is congruent with citation numbers.
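The abstract does not detail how pairwise choices were related to citation counts. One simple way to operationalize it, sketched below under the assumption that expert preference is summarized as a per-paper win rate from the pairwise task and compared to citations by rank correlation, would be the following (all names are illustrative, not the authors' code):

```python
from collections import defaultdict
from scipy.stats import spearmanr

def preference_vs_citations(comparisons, citations):
    """comparisons: list of (paper_a, paper_b, winner) tuples from a
    pairwise impact-assessment task, where winner is paper_a or paper_b.
    citations: dict mapping paper_id -> citation count.
    Returns the Spearman rank correlation between each paper's win rate
    and its citation count."""
    wins, trials = defaultdict(int), defaultdict(int)
    for a, b, winner in comparisons:
        trials[a] += 1
        trials[b] += 1
        wins[winner] += 1
    papers = [p for p in trials if p in citations]
    win_rates = [wins[p] / trials[p] for p in papers]
    cites = [citations[p] for p in papers]
    rho, pval = spearmanr(win_rates, cites)
    return rho, pval
```

A mild positive correlation in the abstract's sense would correspond to a small positive `rho` here; restricting `comparisons` to pairs of a researcher's own papers would isolate the unbiased within-author condition the study describes.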
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.