2011
DOI: 10.1007/s10994-011-5247-6

Applying the information bottleneck to statistical relational learning

Abstract: In this paper we propose to apply the Information Bottleneck (IB) approach to the sub-class of Statistical Relational Learning (SRL) languages that are reducible to Bayesian networks. When the resulting networks involve hidden variables, learning these languages requires the use of techniques for learning from incomplete data such as the Expectation Maximization (EM) algorithm. Recently, the IB approach was shown to be able to avoid some of the local maxima in which EM can get trapped when learning with hidden…
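To make the incomplete-data setting concrete, the following is a minimal sketch (not the paper's code) of EM on the classic two-coin mixture: the hidden variable is which coin produced each sequence of flips. All names and data here are illustrative assumptions.

```python
def em_two_coins(flip_counts, theta, n_iter=50):
    """flip_counts: list of (heads, tails) pairs, one per sequence.
    theta: initial guess (bias_a, bias_b). Returns the estimated biases."""
    theta_a, theta_b = theta
    for _ in range(n_iter):
        # E-step: posterior responsibility of coin A for each sequence,
        # assuming a uniform prior over the hidden coin identity.
        heads_a = tails_a = heads_b = tails_b = 0.0
        for h, t in flip_counts:
            like_a = theta_a ** h * (1.0 - theta_a) ** t
            like_b = theta_b ** h * (1.0 - theta_b) ** t
            resp_a = like_a / (like_a + like_b)
            heads_a += resp_a * h
            tails_a += resp_a * t
            heads_b += (1.0 - resp_a) * h
            tails_b += (1.0 - resp_a) * t
        # M-step: re-estimate each bias from its expected counts.
        theta_a = heads_a / (heads_a + tails_a)
        theta_b = heads_b / (heads_b + tails_b)
    return theta_a, theta_b
```

Note that a perfectly symmetric initialization (e.g. both biases 0.5) makes the responsibilities uniform and leaves EM stuck at a degenerate stationary point; this sensitivity to initialization is the kind of failure mode the IB approach is claimed to mitigate.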

Cited by 17 publications (21 citation statements)
References 31 publications
“…We implemented EMBLEM in Yap Prolog 4 and we compared it with RIB [31]; CEM, an implementation of EM based on the cplint inference library [27,30]; LeProblog [9,10] and Alchemy [24]. All experiments were performed on Linux machines with an Intel Core 2 Duo E6550 (2333 MHz) processor and 4 GB of RAM.…”
Section: Methods
confidence: 99%
“…RIB [31] performs parameter learning using the information bottleneck approach, which is an extension of EM targeted especially towards hidden variables. However, it works best when interpretations have the same Herbrand base, which is not always the case.…”
Section: Related Work
confidence: 99%
“…Other EM-based approaches for parameter learning include PRISM (Sato and Kameya 2001), LFI-ProbLog (Gutmann et al. 2011), ProbLog2 (Fierens et al. 2013) and RIB (Riguzzi and Di Mauro 2012).…”
Section: Parameter Learning
confidence: 99%
“…Formalisms falling into this category are probabilistic horn abduction (PHA) (Poole, 1993), probabilistic logic programming (PLP) (Ng and Subrahmanian, 1992), relational Bayesian networks (RBNs) (Jaeger, 1997), Bayesian logic programming (BLP) (Kersting et al., 2000), stochastic logic programmes (SLPs) (Muggleton, 1996), PRISM (Sato and Kameya, 1997), CLP(BN) (Costa and Cussens, 2003), ProbLog, and logic programmes with annotated disjunctions (LPADs) (Vennekens et al., 2004). Since some of these languages can be translated into Bayesian networks, when the networks contain hidden variables, learning the parameters of these languages requires the use of techniques for learning from incomplete data such as the expectation maximisation (EM) algorithm (Dempster et al., 1977) or the recent relational information bottleneck (RIB) framework (Riguzzi and Di Mauro, 2012). In the indirect approach, conversely, formulae are not explicitly associated to their probability, and the probability of a possible world is defined in terms of its features by means of an associated real-valued parameter.…”
Section: Statistical Relational Learning
confidence: 99%