Abstract. How to assess the performance of machine learning algorithms is a problem of increasing interest and urgency as data mining applications of myriad algorithms grow. The standard approach of employing predictive accuracy has, we argue rightly, been losing favor in the AI community. Cost-sensitive metrics provide a far better alternative, given the availability of useful cost functions. For situations where no useful cost function can be found, we need other alternatives to predictive accuracy. We propose that information-theoretic reward functions be applied. The first such proposal aimed specifically at assessing machine learning algorithms was made by Kononenko and Bratko [1]. Here we improve upon our alternative Bayesian metric [2], which provides a fair betting assessment of any machine learner. We include an empirical analysis of various Bayesian classification learners, ranging from Naive Bayes learners to causal discovery algorithms.
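To make the flavour of such reward functions concrete, the sketch below computes a prior-relative logarithmic reward in Python: a prediction earns bits only to the extent that it assigns more probability to the true class than the class prior does, so a learner that merely reports the prior scores zero. This is a simplified illustration in the spirit of Kononenko and Bratko's information score [1], not the exact metric of [1] or [2]; the function names and the symmetric penalty for predictions worse than the prior are our assumptions.

```python
import math

def information_reward(prior, posterior):
    """Prior-relative log reward for one test case (illustrative sketch).

    prior:     P(C)  -- class prior of the true class C
    posterior: P'(C) -- probability the learner assigns to the true class

    Reporting the prior earns 0 bits; improving on the prior earns
    positive bits, worsening it earns negative bits.
    """
    if posterior >= prior:
        # Reward: information gained about the true class.
        return math.log2(posterior) - math.log2(prior)
    # Penalty: information lost, measured on the complementary event.
    return -(math.log2(1.0 - posterior) - math.log2(1.0 - prior))

def average_reward(priors, posteriors):
    """Mean reward over a test set; the metrics discussed average per case."""
    return sum(map(information_reward, priors, posteriors)) / len(priors)

# Example: a two-class problem with prior P(C) = 0.5.
print(information_reward(0.5, 0.75))  # ~ 0.585 bits: better than the prior
print(information_reward(0.5, 0.5))   # 0.0: a fair bet at the prior
print(information_reward(0.5, 0.25))  # ~ -0.585 bits: worse than the prior
```

Unlike raw predictive accuracy, this kind of reward is sensitive to the probabilities a learner reports, which is what underwrites the "fair betting" interpretation.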
A metric of causal power can assist in developing and using causal Bayesian networks. We introduce such a metric based upon information theory and show that it generalizes prior metrics, which were restricted to linear and noisy-or models, while being appropriate to the full representational power of Bayesian networks.
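The sketch below illustrates one natural information-theoretic reading of causal power: the mutual information between a cause and its effect when the cause is set by intervention rather than observed. The conditional probability table, the uniform intervention distribution, and the function names are our assumptions for the example, not the paper's definitions.

```python
import math

def mutual_information(joint):
    """Mutual information I(C;E) in bits from a joint table joint[c][e]."""
    p_c = [sum(row) for row in joint]        # marginal over the cause
    p_e = [sum(col) for col in zip(*joint)]  # marginal over the effect
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p_ce in enumerate(row):
            if p_ce > 0:
                mi += p_ce * math.log2(p_ce / (p_c[i] * p_e[j]))
    return mi

def causal_power(cpt, intervention):
    """I(C;E) under an intervention on C (illustrative sketch).

    cpt:          cpt[c][e] = P(E=e | do(C=c)), the effect's distribution
                  under each forced setting of the cause
    intervention: distribution over the forced settings of C
    """
    joint = [[p_c * p_e_given_c for p_e_given_c in row]
             for p_c, row in zip(intervention, cpt)]
    return mutual_information(joint)

# Example: a noisy binary cause-effect link, cause forced to 0/1 uniformly.
cpt = [[0.9, 0.1],   # P(E | do(C=0))
       [0.2, 0.8]]   # P(E | do(C=1))
print(causal_power(cpt, [0.5, 0.5]))  # ~ 0.40 bits of causal power
```

Because mutual information is defined for arbitrary discrete distributions, a metric of this form is not tied to linear or noisy-or parameterizations, which is the sense in which it can cover the full representational power of a Bayesian network.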