2016 IEEE 16th International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2016.0123

ExploreKit: Automatic Feature Generation and Selection

Cited by 136 publications (104 citation statements)
References 8 publications
“…not hosted by SPARQL endpoints). Finally, it would be interesting to investigate techniques for automatic feature selection [22], such as those described in [9], where a novel machine learning-based feature selection method is used to predict the usefulness of candidate features.…”
Section: Results (mentioning, confidence: 99%)
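The approach mentioned above (a learned model that predicts the usefulness of candidate features) can be sketched roughly as follows; the meta-features, model choice, and data are illustrative assumptions rather than the exact design of [9].

```python
# Minimal sketch: train a classifier on meta-features of past candidate
# features and use it to predict whether a new candidate is likely useful.
# The meta-features and model are assumptions, not the exact design of [9].
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def meta_features(candidate, y):
    """Describe a candidate feature with a few simple statistics."""
    corr = abs(np.corrcoef(candidate, y)[0, 1]) if candidate.std() > 0 else 0.0
    return [candidate.mean(), candidate.std(), corr]

# Hypothetical historical data: meta-features of previously evaluated
# candidates and whether adding each one improved a downstream model.
X_meta = np.array([[0.1, 1.2, 0.40], [0.0, 0.3, 0.02],
                   [2.3, 0.9, 0.55], [1.1, 0.1, 0.05]])
y_meta = np.array([1, 0, 1, 0])
ranker = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_meta, y_meta)

# Score a newly generated candidate feature on the current task.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)
new_candidate = rng.normal(size=100) + y  # stand-in for a generated feature
usefulness = ranker.predict_proba([meta_features(new_candidate, y)])[0, 1]
print(f"predicted usefulness: {usefulness:.2f}")
```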
“…In this section, we state the motivation for our work: why we consider high-order cross features and why existing works do not suit our purpose. While most early work on automatic feature generation focuses on second-order interactions of original features [5,6,19,21,34], recent trends consider higher-order (i.e., order higher than two) interactions to make data more informative and discriminative [2,26,33,42]. High-order cross features, like other high-order interactions, can further improve data quality and increase the predictive power of learning algorithms.…”
Section: Motivation (mentioning, confidence: 99%)
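For concreteness, a minimal sketch of the difference between second-order and higher-order feature crossing is given below; the column names and the simple concatenation-based crossing operator are hypothetical and not taken from any cited method.

```python
# Illustrative sketch of second-order vs. high-order categorical feature
# crossing. The candidate space grows combinatorially with the crossing order,
# which is why exhaustive high-order crossing is expensive.
from itertools import combinations
import pandas as pd

df = pd.DataFrame({
    "device": ["ios", "android", "ios"],
    "country": ["US", "DE", "US"],
    "hour": ["morning", "night", "morning"],
})

def cross(df, cols):
    """Concatenate several categorical columns into one cross feature."""
    return df[list(cols)].astype(str).agg("_".join, axis=1)

# Second-order crossings: every pair of original features.
for pair in combinations(df.columns, 2):
    df["_x_".join(pair)] = cross(df, pair)

# One third-order (high-order) crossing.
df["device_x_country_x_hour"] = cross(df, ("device", "country", "hour"))
print(df.head())
```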
“…On the one hand, search-based feature generation methods employ explicit search strategies to construct useful features or feature sets. Many such methods focus on numerical features [11,20,21,35,36] and do not generate cross features. Existing feature crossing methods [5,34], meanwhile, are not designed for high-order feature crossing and are therefore inefficient at it.…”
Section: Motivation (mentioning, confidence: 99%)
“…Most existing methods for automatic FE either generate a large set of candidate features with predefined transformation operators and then apply feature selection [3,7,15], or apply simple supervised learning (a simple algorithm and/or simple meta-features derived from the FE process) to recommend potentially useful features [4,5,9]. The former makes the process computationally expensive, even more so for complex features, while the latter significantly limits the performance boost.…”
Section: Introduction (mentioning, confidence: 99%)
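A minimal sketch of the first paradigm described above (expand with predefined transformation operators, then select) follows; the operators and the correlation-based filter are illustrative assumptions, not the procedure of any specific cited work.

```python
# Sketch of the "expand with predefined operators, then select" paradigm.
# Operators and the selection criterion are illustrative assumptions.
import numpy as np
import pandas as pd

OPERATORS = {
    "log1p": lambda s: np.log1p(s - s.min()),        # shift to keep values non-negative
    "square": lambda s: s ** 2,
    "zscore": lambda s: (s - s.mean()) / (s.std() + 1e-9),
}

def generate_candidates(df):
    """Apply every unary operator to every numeric column."""
    out = {}
    for col in df.select_dtypes("number").columns:
        for name, op in OPERATORS.items():
            out[f"{name}({col})"] = op(df[col])
    return pd.DataFrame(out)

def select(candidates, y, threshold=0.1):
    """Cheap filter-style selection: keep candidates correlated with the target."""
    keep = [c for c in candidates.columns
            if abs(np.corrcoef(candidates[c], y)[0, 1]) > threshold]
    return candidates[keep]

rng = np.random.default_rng(0)
df = pd.DataFrame({"age": rng.integers(18, 70, 200),
                   "income": rng.normal(50, 10, 200)})
y = (df["income"] > 50).astype(int).to_numpy()
selected = select(generate_candidates(df), y)
print(list(selected.columns))
```

Exhaustively applying operators like this is what makes the candidate set, and hence the selection step, computationally expensive as the operator vocabulary and feature order grow.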