This paper describes the results of a study designed to assess human expert ratings of educational concept features for use in automatic core concept extraction systems. Digital library resources provided the content base for human experts to annotate automatically extracted concepts on seven dimensions: coreness, local importance, topic, content, phrasing, structure, and function. The annotated concepts served as training data for a machine learning classifier, part of a tool that predicts the core concepts in a document. These predictions were then compared with the experts' judgments of concept coreness.
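As a minimal illustration of the setup described above, the sketch below trains a classifier to predict coreness from the other six annotated dimensions. The feature names, rating scale, data, and choice of model are hypothetical stand-ins; the paper's actual features and classifier may differ.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each extracted concept is represented by expert ratings on the six
# non-target dimensions; "coreness" is the binary label to predict.
FEATURES = ["local_importance", "topic", "content", "phrasing", "structure", "function"]

rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(200, len(FEATURES)))   # placeholder expert ratings
y = (X.mean(axis=1) > 3).astype(int)               # placeholder coreness labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)          # agreement with expert coreness
print(f"Mean CV accuracy: {scores.mean():.2f}")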
Pediatric anesthesia impact factors (Ped IFs) were consistently lower than the journal impact factors (JIFs) of the journals in which the articles were published, and lower than the Pain IFs, except for the British Journal of Anaesthesia in 2005 in the latter case. The number of citations of pediatric anesthesia articles was greater in journals with higher IFs. The implications of subspecialty IFs warrant further consideration.
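For context, the standard two-year journal impact factor underlying these comparisons is, in LaTeX notation,

\mathrm{IF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}

where C_y(y-k) counts citations received in year y by items published in year y-k, and N_{y-k} is the number of citable items published in year y-k. That the study computed the subspecialty Ped and Pain IFs analogously over the corresponding subsets of articles is an assumption; the method is not detailed in this excerpt.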
BACKGROUND:
Limited data exist regarding computational drug error rates in anesthesia residents and faculty. We investigated the frequency and magnitude of computational errors in a sample of anesthesia residents and faculty.
METHODS:
With institutional review board approval from 7 academic institutions in the United States, a 15-question computational test was distributed during rounds. Error rates and the magnitude of the errors were analyzed according to resident versus faculty, years of practice (or residency training), duration of sleep, type of question, and institution.
RESULTS:
A total of 371 participants completed the test: 209 residents and 162 faculty. Both groups committed 2 errors (median value) per test, for a mean error rate of 17.0%. Twenty percent of residents and 25% of faculty answered all questions correctly. The error rate for postgraduate year 2 residents was lower than for postgraduate year 1 (P = .012). The error rate for faculty increased with years of experience, with a weak correlation (R = 0.22; P = .007). The error rates were independent of the number of hours of sleep. The error rate for percentage-type questions was greater than for rate, dose, and ratio questions (P = .001); a worked example of a percentage-type calculation follows this abstract. The error rates varied with the number of operations needed to calculate the answer (P < .001). The frequency of large errors (100-fold greater or smaller than the correct answer) by residents was twice that of faculty. Error rates varied among institutions, ranging from 12% to 22% (P = .021).
CONCLUSIONS:
Anesthesiology residents and faculty erred frequently on a computational test, with error rates highest among junior residents and among faculty with more years of experience. Residents committed serious errors twice as frequently as faculty.
Research in analogical reasoning suggests that higher-order cognitive functions such as abstract reasoning, far transfer, and creativity are founded on recognizing structural similarities among relational systems. Here we integrate theories of analogy with the computational framework of reinforcement learning (RL). We propose a computational synergy between analogy and RL, in which analogical comparison provides the RL learning algorithm with a measure of relational similarity, and RL provides feedback signals that can drive analogical learning. Initial simulation results support the power of this approach.
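One possible concrete reading of the proposed synergy is sketched below: a tabular Q-learner whose temporal-difference updates generalize across states in proportion to a relational-similarity score supplied by an analogical comparison module. The similarity matrix, environment size, and update rule are illustrative assumptions, not the paper's actual model.

import numpy as np

N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA = 0.1, 0.9

# Stand-in for structure-mapping output: sim[i, j] in [0, 1] measures how
# analogous the relational structure of state i is to that of state j.
rng = np.random.default_rng(0)
sim = rng.uniform(0, 1, (N_STATES, N_STATES))
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 1.0)

Q = np.zeros((N_STATES, N_ACTIONS))

def update(s, a, r, s_next):
    """TD update that spreads credit to analogically similar states."""
    td_error = r + GAMMA * Q[s_next].max() - Q[s, a]
    for s2 in range(N_STATES):
        Q[s2, a] += ALPHA * sim[s, s2] * td_error  # similarity-weighted update

update(s=0, a=1, r=1.0, s_next=2)
print(Q[:, 1])  # states analogous to state 0 also gained value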