2013
DOI: 10.1007/978-3-642-38812-5_3

Propositionalisation of Continuous Attributes beyond Simple Aggregation


Cited by 4 publications (3 citation statements)
References 16 publications
“…It is shown in El Jelali et al (2012) that no single propositionalization and attribute-value learner wins on all datasets. Moreover, we discussed the importance of finding an optimal combination on the training dataset in Section 1.…”
Section: The WPC Algorithm
confidence: 96%
“…In addition, we propose an effective wrapping algorithm to select the best combination of propositionalization and classification methods which adds more flexibility in relational data mining for online/incremental/data stream environment. This paper includes substantively new and different contributions beyond the preliminary conference version (El Jelali, Braud, & Lachiche, 2012) including detailed presentations of the approaches with revised and optimized algorithms, enhanced motivations with real-life applications, extended experiments with new classifiers and more real-life benchmarks, introduction and evaluation of a wrapping algorithm to identify the best combination of propositionalization and classification methods from training data.…”
Section: Complex Aggregates
confidence: 99%
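The quoted passages describe a wrapping algorithm that selects the best combination of propositionalisation method and attribute-value classifier on the training data, since no single combination wins on all datasets. The sketch below illustrates that selection loop only in outline; all function names and the toy "propositionalisers" and "learners" are invented stand-ins, not the authors' actual algorithm or API, and a real implementation would score each combination by inner cross-validation rather than training-set accuracy.

```python
# Illustrative sketch of wrapper-style selection over combinations of
# propositionalisation and attribute-value learning. All names are
# hypothetical stand-ins, not the cited paper's implementation.

# Toy propositionalisers: each reduces one "relational" record
# (a list of numbers per example) to a single aggregate feature.
def prop_mean(rows):    return [sum(r) / len(r) for r in rows]
def prop_maximum(rows): return [max(r) for r in rows]

# Toy attribute-value learners: fixed-threshold classifiers.
def learner_low(x):  return int(x > 2.0)
def learner_high(x): return int(x > 5.0)

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def wrap_select(train_rows, train_labels, props, learners):
    """Return the (propositionaliser, learner) pair scoring best on the
    training data -- a simplification of an inner cross-validation loop."""
    best, best_score = None, -1.0
    for p in props:
        feats = p(train_rows)          # propositionalise once per method
        for l in learners:
            score = accuracy([l(x) for x in feats], train_labels)
            if score > best_score:
                best, best_score = (p, l), score
    return best, best_score

rows = [[1, 2, 3], [4, 5, 6], [0, 1, 1], [7, 8, 9]]
labels = [0, 1, 0, 1]
(best_p, best_l), score = wrap_select(
    rows, labels, [prop_mean, prop_maximum], [learner_low, learner_high])
print(best_p.__name__, best_l.__name__, score)  # prop_mean learner_low 1.0
```

The point of wrapping is that the transformation and the learner are chosen jointly: a propositionalisation that helps one classifier may hurt another, so each pair is evaluated as a unit.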
“…As another example, the input data can be transformed into a normalised or a discretised version [17], using the data distribution. Note that in this case we apply the transformation to the input features both during training and during testing.…”
Section: Input Reframing
confidence: 99%
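The input-reframing quote notes that a normalised or discretised transformation is estimated from the data distribution and then applied to the input features both during training and during testing. A minimal sketch of that discipline, assuming simple min-max normalisation (the helper names are illustrative, not from the cited work):

```python
# Hedged sketch of input reframing via normalisation: scaling parameters
# are fitted on the training distribution only, then the identical
# transformation is applied at both training and test time.

def fit_minmax(values):
    """Estimate min-max scaling parameters from the training data."""
    lo, hi = min(values), max(values)
    return lo, (hi - lo) or 1.0  # guard against a constant feature

def apply_minmax(values, lo, span):
    """Apply the fitted transformation -- identical at train and test time."""
    return [(v - lo) / span for v in values]

train = [2.0, 4.0, 6.0, 8.0]
test  = [3.0, 10.0]  # test values may legitimately fall outside [0, 1]

lo, span = fit_minmax(train)
print(apply_minmax(train, lo, span))
print(apply_minmax(test, lo, span))
```

Fitting on the training set alone keeps the test data unseen; re-estimating the parameters on the test set would silently change the feature space between training and deployment.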