2015
DOI: 10.1007/s10994-015-5510-3

Bandit-based Monte-Carlo structure learning of probabilistic logic programs

Abstract: Probabilistic logic programming can be used to model domains with complex and uncertain relationships among entities. While the problem of learning the parameters of such programs has been considered by various authors, the problem of learning the structure is yet to be explored in depth. In this work we present an approximate search method based on a one-player game approach, called LEMUR. It sees the problem of learning the structure of a probabilistic logic program as a multi-armed bandit problem, relying o…
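As a rough illustration of the bandit view described in the abstract, the sketch below applies plain UCB1 arm selection to a set of candidate clause refinements. The names candidate_refinements and evaluate_refinement are hypothetical placeholders, and the loop is a generic multi-armed bandit routine under those assumptions, not LEMUR's actual Monte-Carlo tree search.

import math

def ucb1_select(stats, total_plays, c=math.sqrt(2)):
    # Pick the arm maximizing mean reward plus an exploration bonus (UCB1).
    best_arm, best_value = 0, float("-inf")
    for arm, (plays, mean_reward) in enumerate(stats):
        if plays == 0:
            return arm  # try every refinement at least once
        value = mean_reward + c * math.sqrt(math.log(total_plays) / plays)
        if value > best_value:
            best_arm, best_value = arm, value
    return best_arm

def bandit_structure_search(candidate_refinements, evaluate_refinement, budget=100):
    # (plays, running mean reward) for each candidate clause refinement.
    stats = [(0, 0.0) for _ in candidate_refinements]
    for t in range(1, budget + 1):
        arm = ucb1_select(stats, t)
        reward = evaluate_refinement(candidate_refinements[arm])  # e.g. a likelihood-based score
        plays, mean = stats[arm]
        stats[arm] = (plays + 1, mean + (reward - mean) / (plays + 1))
    # Return the refinement with the best estimated score.
    return max(range(len(stats)), key=lambda arm: stats[arm][1])

The exploration constant c trades off revisiting refinements that already score well against sampling rarely tried ones; a full tree-search variant would apply this selection rule recursively at every node of the search tree.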

Cited by 9 publications (11 citation statements). References 56 publications (66 reference statements).
“…• We performed tests on the datasets of Nguembang Fadja and Riguzzi (2018) plus the Bongard dataset (Bongard 1970), by which the Bongard Problem of Example 1 is inspired. Note that SLIPCOVER, LIFTCOVER and LEMUR can be seen as baselines for comparison with PASCAL, since they have already been compared with many state-of-the-art systems in our previous works (Bellodi and Riguzzi 2015; Di Mauro et al 2015), demonstrating that they were competitive with or superior to MLN learning systems.…”
Section: Methods
confidence: 99%
“…LIFTCOVER can exploit either an expectation maximization (EM) algorithm (Dempster et al 1977) or L-BFGS (Nocedal 1980) to maximize the log-likelihood during parameter learning, so results show both variants. LEMUR settings can be found in subsection 7.1 of Di Mauro et al (2015) for the three datasets in common (Carcinogenesis, Mondial, Mutagenesis).…”
Section: Methods
confidence: 99%
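The L-BFGS option mentioned in the statement above is a generic quasi-Newton optimizer applied to the log-likelihood. The toy sketch below fits a single Bernoulli parameter by minimizing a negative log-likelihood with SciPy's L-BFGS-B routine; the data and the one-parameter model are illustrative assumptions and are not LIFTCOVER's actual liftable model.

import numpy as np
from scipy.optimize import minimize

# Hypothetical binary outcomes standing in for positive/negative examples.
outcomes = np.array([1, 0, 1, 1, 0, 1, 1, 1])

def neg_log_likelihood(theta):
    # Optimize an unconstrained logit and map it to a probability in (0, 1).
    p = 1.0 / (1.0 + np.exp(-theta[0]))
    ll = np.sum(outcomes * np.log(p) + (1 - outcomes) * np.log(1 - p))
    return -ll  # minimize the negative log-likelihood

result = minimize(neg_log_likelihood, x0=np.array([0.0]), method="L-BFGS-B")
p_hat = 1.0 / (1.0 + np.exp(-result.x[0]))
print(f"estimated probability: {p_hat:.3f}")  # close to the empirical frequency 0.75

An EM variant would instead alternate between computing expected counts for the latent choices and re-estimating the parameters in closed form, which is the other option the statement mentions.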
“…EMBLEM (Bellodi and Riguzzi 2013) performs parameter learning in cplint on SWISH using a special dynamic programming algorithm operating on BDDs. For structure learning, cplint on SWISH includes SLIPCOVER (Bellodi and Riguzzi 2015), which performs a search in the space of clauses and scores them using the likelihood of the data after parameter learning by EMBLEM, and LEMUR (Di Mauro et al 2015), which is similar to SLIPCOVER but uses Monte Carlo tree search.…”
Section: Extending SWISH
confidence: 99%
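For intuition on the BDD-based dynamic programming mentioned in the statement above, the sketch below computes the probability that a Boolean function encoded as a tiny hand-built BDD is true, recursing as p(var) * P(high) + (1 - p(var)) * P(low). This is the generic knowledge-compilation idea under an assumed Node representation, not EMBLEM's implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    var: str = None      # decision variable; None for terminal nodes
    low: "Node" = None   # branch taken when var is false
    high: "Node" = None  # branch taken when var is true
    value: bool = None   # True/False payload for terminal nodes

TRUE, FALSE = Node(value=True), Node(value=False)

def probability(node, probs):
    # P(function is true): terminals contribute 1 or 0, inner nodes mix their children.
    if node.value is not None:
        return 1.0 if node.value else 0.0
    p = probs[node.var]
    return p * probability(node.high, probs) + (1 - p) * probability(node.low, probs)

# Example: f = a OR b over the variable ordering a < b.
b_node = Node(var="b", low=FALSE, high=TRUE)
root = Node(var="a", low=b_node, high=TRUE)
print(probability(root, {"a": 0.3, "b": 0.5}))  # 0.3 + 0.7 * 0.5 = 0.65

A real implementation would memoize results per node so that shared subgraphs of the reduced BDD are evaluated only once, which is what makes the dynamic programming efficient.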