Foundations of Rule Learning
2012
DOI: 10.1007/978-3-540-75197-7

Cited by 300 publications (218 citation statements)
References 29 publications
“…In the experimental study, we used a model that coincides by its structure with (15) and (16), but has randomized parameters and noises:…”
Section: Randomized Model (Decision Rule)
Citation type: mentioning (confidence: 99%)
“…Relevant references to them can be found in monographs [1][2][3][4][5][6][7][8], lectures [9,10] and reviews [11][12][13]. The recent fundamental works [6,14,15] clarify the vast diversity of classification algorithms and their learning procedures.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…The SD approach hence relies on a quality measure which evaluates the singularity of the subgroup within the population with regard to a target class function: the class attribute. The choice of the measure depends on the dataset but also on the purpose of the application [8]. There are two main kinds of quality measures: the first is used with mono-labeled datasets, e.g., the F1 measure, the WRAcc measure, the Gini index or the entropy (the original SD [17]); the second is used with multi-labeled datasets, e.g., the weighted Kullback-Leibler divergence (WKL) as used in EMM [6].…”
Section: Subgroup Discovery
Citation type: mentioning (confidence: 99%)
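The WRAcc measure named in the excerpt above can be illustrated with a short sketch. This follows the standard definition, WRAcc = coverage × (subgroup precision − population base rate); the toy data and function name are illustrative, not from the excerpt.

```python
# Minimal sketch of Weighted Relative Accuracy (WRAcc) for a subgroup,
# assuming the standard definition: coverage * (precision - base rate).
def wracc(subgroup_labels, all_labels, target=1):
    n, n_s = len(all_labels), len(subgroup_labels)
    base_rate = sum(1 for y in all_labels if y == target) / n
    precision = sum(1 for y in subgroup_labels if y == target) / n_s
    return (n_s / n) * (precision - base_rate)

# Toy data: a subgroup covering 4 of 10 examples, 3 of them positive.
population = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 40% positives overall
subgroup = [1, 1, 1, 0]                      # 75% positives in subgroup
print(wracc(subgroup, population))           # 0.4 * (0.75 - 0.4) ≈ 0.14
```

A positive WRAcc means the subgroup is enriched in the target class relative to the whole population; the coverage factor penalizes tiny subgroups that would otherwise trivially reach perfect precision.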
“…The original F-Score. Complete surveys help in understanding how to choose the right measure [8]. The generalized version of the WKL 1 considers the labels in the subset L ⊆ Dom(C) as independent and does not look for their co-occurrences.…”
Section: Redescription Mining
Citation type: mentioning (confidence: 99%)
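The WKL-style measures referred to in the excerpts can be sketched as the KL divergence of the subgroup's label distribution from the population's, weighted by subgroup coverage. The exact weighting and label treatment vary between papers, so this is only an illustrative assumption, not the formulation of any particular cited work.

```python
from math import log2

# Hedged sketch of a weighted Kullback-Leibler (WKL) style quality
# measure: KL divergence of the subgroup's label distribution from the
# population's, weighted by the subgroup's coverage. Illustrative only.
def wkl(subgroup_counts, population_counts):
    n_s = sum(subgroup_counts.values())
    n = sum(population_counts.values())
    kl = 0.0
    for label, count in subgroup_counts.items():
        p = count / n_s                      # probability in subgroup
        q = population_counts[label] / n     # probability in population
        if p > 0:
            kl += p * log2(p / q)
    return (n_s / n) * kl

# Toy data: the subgroup over-represents label "A" (75% vs 40%).
print(wkl({"A": 3, "B": 1}, {"A": 4, "B": 6}))  # > 0: distinctive subgroup
```

A subgroup whose label distribution matches the population's scores zero; the more the distribution deviates, the higher the score, again damped by coverage.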
“…The results of rule-based methods may show closer resemblance to the reasoning of psychologists working in applied settings, and allow for direct identification of high- and/or low-risk patients. Due to their interpretability and flexibility, rule-based methods have already gained popularity in the areas of machine learning and data mining (Fürnkranz, Gamberger, & Lavrač, 2012). Furthermore, the decision rules resulting from the application of rule-based methods can be represented as fast and frugal trees (FFTs; Martignon, Vitouch, Takezawa, & Forster, 2003): graphically represented decision tools developed within the area of heuristic decision making (Gigerenzer & Goldstein, 1996; Gigerenzer, Todd, & the ABC Research Group, 1999).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
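The fast and frugal trees mentioned above check one cue at a time, and every non-final cue has exactly one exit (an immediate decision); only the last cue decides in both directions. A minimal sketch, with entirely hypothetical cues and thresholds:

```python
# Hedged sketch of a fast-and-frugal tree (FFT) classifier. Each
# non-final cue either exits with a decision or passes control to the
# next cue; the final cue decides either way. Cue names, thresholds,
# and the risk-screening scenario are invented for illustration.
def fft_classify(patient, cues):
    """cues: list of (predicate, exit_decision) pairs; the last
    predicate decides on both its True and False outcomes."""
    for predicate, exit_decision in cues[:-1]:
        if predicate(patient):
            return exit_decision               # exit immediately
    predicate, exit_decision = cues[-1]
    return exit_decision if predicate(patient) else not exit_decision

# Hypothetical risk-screening cues (illustrative only).
cues = [
    (lambda p: p["prior_episodes"] >= 3, True),   # exit: high risk
    (lambda p: p["age"] < 18, False),             # exit: low risk
    (lambda p: p["score"] > 7, True),             # final cue decides
]
print(fft_classify({"prior_episodes": 0, "age": 30, "score": 9}, cues))
```

The lexicographic cue order is what makes FFTs "frugal": most cases are decided after inspecting only one or two cues, which mirrors how a flat rule list is read top to bottom.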