Automatic learning of Bayesian networks from data is a challenging task, particularly when the data are scarce and the problem domain contains a large number of random variables. The introduction of expert knowledge is recognized as an excellent way to reduce the inherent uncertainty of the models retrieved by automatic learning methods. Previous Bayesian approaches to this problem introduce expert knowledge through the elicitation of informative prior probability distributions over graph structures. In this paper, we present a new methodology for integrating expert knowledge that is based on Monte Carlo simulations. It avoids the costly elicitation of these prior distributions and asks the expert only about those direct probabilistic relationships between variables that cannot be reliably discerned from the data.
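The idea of querying the expert only about unreliable relationships can be sketched as follows. This is a minimal, illustrative reading of the abstract, not the paper's algorithm: we assume Monte Carlo runs (e.g., structure learning on resampled data) each yield a set of directed edges, and that edges whose empirical frequency falls in an ambiguous middle band are the ones forwarded to the expert. All function names and thresholds here are hypothetical.

```python
# Hedged sketch: select edges for expert review based on Monte Carlo
# edge frequencies. Edges that appear in almost all or almost no runs
# are considered reliably decided by the data; the rest are queried.

def edge_frequencies(samples):
    """samples: list of edge sets, one per Monte Carlo run."""
    counts = {}
    for edges in samples:
        for e in edges:
            counts[e] = counts.get(e, 0) + 1
    n = len(samples)
    return {e: c / n for e, c in counts.items()}

def edges_to_ask_expert(freqs, low=0.25, high=0.75):
    # Only edges whose frequency lies in the ambiguous band are queried;
    # the band endpoints are illustrative, not from the paper.
    return [e for e, f in freqs.items() if low <= f <= high]

# Toy example: four Monte Carlo runs over variables A, B, C.
samples = [
    {("A", "B"), ("B", "C")},
    {("A", "B")},
    {("A", "B"), ("A", "C")},
    {("A", "B"), ("B", "C")},
]
freqs = edge_frequencies(samples)
ambiguous = edges_to_ask_expert(freqs)
# ("A", "B") appears in every run, so it is accepted without the expert;
# ("B", "C") and ("A", "C") are ambiguous and would be queried.
```

The design point the abstract emphasizes is that expert effort scales with the number of ambiguous edges, not with the full space of prior distributions over graph structures.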
Abstract. An often-used approach for detecting and adapting to concept drift in classification is to treat the data as i.i.d. and use changes in classification accuracy as an indication of drift. In this paper, we take a different perspective and propose a framework, based on probabilistic graphical models, that explicitly represents concept drift using latent variables. To ensure efficient inference and learning, we resort to a variational Bayes inference scheme. As a proof of concept, we demonstrate and analyze the proposed framework on synthetic data sets as well as a real financial data set from a Spanish bank.
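The core idea of representing drift with a latent variable can be illustrated with a deliberately simplified model, not the paper's: assume a binary latent "concept" variable with two Gaussian emission components, and compute its posterior responsibility per observation (an E-step-style update, standing in for the variational scheme the abstract mentions). A sustained shift in these posteriors over the stream then signals concept drift. All parameters below are invented for illustration.

```python
import math

# Hedged sketch: posterior over a latent binary concept variable for a
# stream of 1-D observations, under two fixed Gaussian components.

def gauss_pdf(x, mu, sigma):
    """Density of a univariate Gaussian at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def concept_posterior(x, params, prior=0.5):
    """P(concept = 1 | x) for two components with (mu, sigma) in params."""
    (mu0, s0), (mu1, s1) = params
    p0 = (1 - prior) * gauss_pdf(x, mu0, s0)
    p1 = prior * gauss_pdf(x, mu1, s1)
    return p1 / (p0 + p1)

# Illustrative parameters: concept 0 centered at 0, concept 1 at 3.
params = [(0.0, 1.0), (3.0, 1.0)]

# A toy stream whose distribution shifts midway.
stream = [0.1, -0.3, 0.2, 2.9, 3.2, 3.1]
posteriors = [concept_posterior(x, params) for x in stream]
# Early posteriors are near 0 (concept 0), later ones near 1 (concept 1):
# the latent variable's posterior trajectory makes the drift explicit.
```

In the paper's framework the latent variables and the variational updates are part of a full graphical model learned from data; this sketch only conveys why tracking a latent concept posterior is a more explicit drift signal than monitoring accuracy alone.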