2014
DOI: 10.14778/2735508.2735510
QuickFOIL

Abstract: Inductive Logic Programming (ILP) is a classic machine learning technique that learns first-order rules from relational, structured data. However, to date most ILP systems can only be applied to small datasets (tens of thousands of examples). A long-standing challenge in the field is to scale ILP methods to larger datasets. This paper presents a method called QuickFOIL that addresses this limitation. QuickFOIL employs a new scoring function and a novel pruning strategy that enable the algorithm to find high-qua…

Cited by 45 publications (16 citation statements) · References 36 publications
“…In contrast, the top-down approach starts with the most general clauses and then specializes them. A top-down algorithm guided by heuristics is better suited to large-scale and/or noisy datasets [22], in particular because it is scalable.…”
Section: Inductive Logic Programming (ILP)
confidence: 99%
“…The use of a greedy heuristic allows FOIL to run much faster than bottom-up approaches and to scale much better. For instance, the QuickFOIL system [22] can handle millions of training examples in a reasonable time.…”
Section: Inductive Logic Programming (ILP)
confidence: 99%
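As a concrete illustration of the top-down, gain-guided search these statements describe, below is a minimal sketch of FOIL-style clause learning. It is a simplification under stated assumptions: the examples, literal names, and both functions are invented for illustration, examples are plain dicts rather than first-order atoms, and real FOIL (and QuickFOIL) handles variable bindings and a different scoring function.

```python
import math

def foil_gain(pos, neg, literal):
    """FOIL information gain of specializing the current clause with `literal`.

    pos/neg are the positive/negative examples still covered by the clause;
    the gain weights the change in log-precision by the positives retained."""
    p0, n0 = len(pos), len(neg)
    p1 = sum(1 for e in pos if literal(e))
    n1 = sum(1 for e in neg if literal(e))
    if p1 == 0:
        return 0.0
    return p1 * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

def learn_clause(pos, neg, literals):
    """Top-down greedy search: start from the most general clause (empty body,
    covers everything) and repeatedly add the highest-gain literal until no
    negative example is covered or no literal improves the score."""
    body = []
    while neg:
        name, test = max(literals, key=lambda lit: foil_gain(pos, neg, lit[1]))
        if foil_gain(pos, neg, test) <= 0:
            break  # no literal helps: stop rather than loop forever
        body.append(name)
        # Specialize: keep only the examples the extended clause still covers.
        pos = [e for e in pos if test(e)]
        neg = [e for e in neg if test(e)]
    return body, pos, neg

# Toy target: daughter(X) holds iff X is female and has a known parent.
pos = [{"female": True, "has_parent": True}]
neg = [
    {"female": False, "has_parent": True},
    {"female": True, "has_parent": False},
    {"female": False, "has_parent": False},
]
literals = [
    ("female(X)", lambda e: e["female"]),
    ("has_parent(X)", lambda e: e["has_parent"]),
]
body, covered_pos, covered_neg = learn_clause(pos, neg, literals)
```

Because every step is a local coverage count over the current examples, the cost per candidate literal is linear in the data, which is why this greedy, specialize-only search scales to large example sets where exhaustive bottom-up generalization does not.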
“…After three to eight iterations, the means of the precision intervals converge to about 0.91 for T + and even to 0.94 for T −, which confirms that our iterative GILP approach is particularly suitable for pruning negative facts. ILP is widely applied in many domains, such as event recognition [13], data cleaning [22], classification [18], and many others. In this paper, we extend ILP to handle a non-fixed training set, where the learning process is guided by user feedback in an iterative and interactive manner.…”
Section: Effectiveness
confidence: 99%
“…There are a number of recent approaches that specifically tackle the problem of learning consistency constraints from a given KB (or, respectively, from a fixed training subset of the KB) for data-cleaning purposes (see [21] for a recent overview). The kinds of constraints considered for data cleaning traditionally comprise functional dependencies, conditional functional dependencies [6], equality-generating dependencies [5], denial constraints and more general Horn clauses [9,22]. The common rationale behind these learning approaches is that "unusual implies incorrect".…”
Section: Introduction
confidence: 99%