2021
DOI: 10.1109/tpami.2021.3051276
AutoML for Multi-Label Classification: Overview and Empirical Evaluation

Abstract: Automated machine learning (AutoML) supports the algorithmic construction and data-specific customization of machine learning pipelines, including the selection, combination, and parametrization of machine learning algorithms as main constituents. Generally speaking, AutoML approaches comprise two major components: a search space model and an optimizer for traversing the space. Recent approaches have shown impressive results in the realm of supervised learning, most notably (single-label) classification (SLC).…
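The abstract frames AutoML as two components: a search space model and an optimizer that traverses it. A minimal sketch of that decomposition, using random search as the optimizer over a toy multi-label search space (the algorithm and hyperparameter names below are illustrative assumptions, not taken from the paper):

```python
import random

# Hypothetical toy search space: each entry pairs a multi-label
# reduction strategy with a small hyperparameter grid.
SEARCH_SPACE = {
    "binary_relevance_dt": {"max_depth": [2, 4, 8]},
    "classifier_chain_dt": {"max_depth": [2, 4, 8], "order": ["random", "fixed"]},
    "label_powerset_knn": {"k": [1, 3, 5]},
}

def sample_pipeline(rng):
    """Draw one concrete pipeline configuration from the search space."""
    algo = rng.choice(sorted(SEARCH_SPACE))
    params = {name: rng.choice(values) for name, values in SEARCH_SPACE[algo].items()}
    return algo, params

def random_search(evaluate, budget=20, seed=0):
    """Optimizer: sample configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best = None
    for _ in range(budget):
        candidate = sample_pipeline(rng)
        score = evaluate(candidate)
        if best is None or score > best[1]:
            best = (candidate, score)
    return best

def toy_evaluate(candidate):
    """Stand-in for the expensive step: in a real system this would train
    the pipeline and return a multi-label metric such as subset accuracy
    or (negated) Hamming loss. Here it is a deterministic dummy score."""
    algo, params = candidate
    return len(algo) + sum(len(str(v)) for v in params.values())
```

Swapping `random_search` for Bayesian optimization or a grammar-guided search changes only the optimizer, not the search space model, which is the separation the abstract describes.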

Cited by 56 publications (22 citation statements). References 37 publications.
“…A recent AutoML benchmark for multi-label classification Wever et al (2021) proposed a general tool with a configurable search space and optimizer, which allows for the inclusion of new methods and ablation studies. Unfortunately, this approach requires that existing AutoML frameworks be re-implemented within this tool, which is difficult in such a rapidly developing field.…”
Section: Evaluation of Automated Machine Learning
Citation type: mentioning (confidence: 99%)
“…While Feurer et al [17] report computational cost of 11 CPU-years (CPUy), the experimental data of Mohr, Wever, and Hüllermeier [44] is the result of 52 CPUy worth of computations. In [77], the experimental study is as extensive as 84 CPUy. In the sub-field of neural architecture search (NAS) [13], computational costs for experimentation can even be higher.…”
Section: Benchmarking
Citation type: mentioning (confidence: 99%)
“…Obviously, the energy consumption is relatively low, however, energy-efficiency in turn is poor since the information obtained through investing energy is not as valuable as desired. Various benchmarks for AutoML tools have already identified a plethora of confounding factors [3,21,77,86], which hinder interpretation of the results and insights derived from them.…”
Section: Reproducibility and Comparability
Citation type: mentioning (confidence: 99%)
“…In AutoML [16,11,44,48], the learner is no longer restricted to a pre-defined learning model (H, A), consisting of a hypothesis space and a learning algorithm. Instead, it can choose among a broad spectrum of learning algorithms A ∈ A, each associated with an underlying hypothesis space H A .…”
Section: Automated Machine Learning
Citation type: mentioning (confidence: 99%)