“…Formalisms falling into this category are probabilistic Horn abduction (PHA) (Poole, 1993), probabilistic logic programming (PLP) (Ng and Subrahmanian, 1992), relational Bayesian networks (RBNs) (Jaeger, 1997), Bayesian logic programming (BLP) (Kersting et al, 2000), stochastic logic programs (SLPs) (Muggleton, 1996), PRISM (Sato and Kameya, 1997), CLP(BN) (Costa and Cussens, 2003), ProbLog (De Raedt et al, 2007), and logic programs with annotated disjunctions (LPADs) (Vennekens et al, 2004). Since some of these languages can be translated into Bayesian networks, learning their parameters when the networks contain hidden variables requires techniques for learning from incomplete data, such as the expectation maximisation (EM) algorithm (Dempster et al, 1977) or the more recent relational information bottleneck (RIB) framework (Riguzzi and Di Mauro, 2012). In the indirect approach, conversely, formulae are not explicitly associated with a probability; instead, the probability of a possible world is defined in terms of its features, each weighted by an associated real-valued parameter.…”
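To make the last point concrete, the indirect approach is typically realised as a log-linear distribution over possible worlds, as exemplified by Markov logic networks. The display below is a generic sketch rather than the definition used by any particular formalism listed above: here $n_i(\omega)$ denotes the number of groundings of formula $F_i$ that are true in world $\omega$, $w_i$ is its real-valued parameter, and $Z$ is the normalising constant, all introduced purely for illustration.

\[
P(\omega) \;=\; \frac{1}{Z}\,\exp\!\Big(\sum_i w_i\, n_i(\omega)\Big),
\qquad
Z \;=\; \sum_{\omega'} \exp\!\Big(\sum_i w_i\, n_i(\omega')\Big).
\]

Under this reading, a formula with a large positive $w_i$ makes worlds satisfying many of its groundings more probable without fixing an explicit probability for the formula itself, which is precisely the contrast with the direct approach drawn in the quoted passage.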