2013
DOI: 10.1007/978-3-642-36657-4_8

Statistical Relational Learning

Abstract: Relational learning refers to learning from data that have a complex structure. This structure may be either internal (a data instance may itself have a complex structure) or external (relationships between this instance and other data elements). Statistical relational learning refers to the use of statistical learning methods in a relational learning context, and the challenges involved in that. In this chapter we give an overview of statistical relational learning. We start with some motivating problems, and…

Cited by 6 publications (5 citation statements)
References 54 publications
“…StarAI approaches have been successfully applied to a multitude of problems in various structured, uncertain domains. We refer the reader to Getoor and Taskar (2007), Raedt and Kersting (2008), Domingos and Lowd (2009), Blockeel (2013), Kimmig et al (2015), De Raedt et al (2016), and Besold and Lamb (2017) for surveys of StarAI.…”
Section: Toward Unification With StarAI
confidence: 99%
“…This situation causes the standard gradient descent methods to converge very slowly, since there is no single appropriate learning rate for all soft-constrained clauses. An alternative approach to CLL function optimisation is max-margin training, which is better suited to problems where the goal is to maximise the classification accuracy [Huynh and Mooney 2009; 2011]. Instead of optimising the CLL function, max-margin training aims to maximise the ratio between the probability of the correct truth assignment of CEs to hold and the closest competing incorrect truth assignment.…”
Section: The Event Calculus
confidence: 99%
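The max-margin criterion quoted above can be sketched in a few lines. This is a toy illustration only: the scores below are made-up placeholders, not outputs of a real MLN weight learner.

```python
# Toy sketch of the max-margin criterion: the quantity to maximise is the
# gap between the score of the correct truth assignment and the score of
# the closest competing incorrect assignment. Scores are placeholders.
def margin(score_correct, scores_incorrect):
    """Gap between the correct assignment and its best incorrect rival."""
    return score_correct - max(scores_incorrect)

# A positive margin means the correct assignment outscores every rival.
print(margin(4.0, [1.0, 3.0]))  # 1.0
```

A learner following this criterion adjusts weights to push this margin as high as possible, rather than maximising the conditional log-likelihood directly.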
“…Beyond the advantages stemming from the fact that it is a logic-based formalism with clear semantics, one of the most interesting properties of the Event Calculus is that it handles the persistence of CEs with domain-independent axioms. On the other hand, MLNs are a generic statistical relational framework that combines the expressivity of first-order logic with the formal probabilistic properties of undirected graphical models; see de Salvo Braz et al [2008], Raedt and Kersting [2010], and Blockeel [2011] for surveys on logic-based relational probabilistic models. By combining the Event Calculus with MLNs, we present a principled and powerful probabilistic logic-based method for event recognition.…”
Section: Introduction
confidence: 99%
“…This discrepancy more reasonably characterizes the differences between these distributions and addresses the aforementioned issue with the TV distance. WD has been widely used in many applications in statistics and machine learning, such as [17,18], as it has many desired properties. Additionally, it has been shown to be effective in mitigating information loss caused by the utilization of summary statistics in basic ABC techniques [19].…”
Section: Introduction
confidence: 99%
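As a concrete illustration of the Wasserstein distance (WD) mentioned in the statement above: in one dimension, the W1 distance between two equal-size empirical samples reduces to the mean absolute difference of their sorted values. The sketch below assumes this special case and is not tied to any particular ABC implementation.

```python
# Minimal sketch: for two equal-size 1-D empirical samples, the
# Wasserstein-1 distance equals the mean absolute difference of the
# sorted values (optimal transport pairs the i-th order statistics).
def wasserstein_1d(xs, ys):
    assert len(xs) == len(ys), "equal-size samples assumed"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting one sample by a constant shifts the distance by that constant.
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

Unlike the total-variation distance, this quantity grows smoothly with how far mass must be moved, which is the property the quoted statement highlights.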