2019
DOI: 10.1007/978-3-030-33676-9_40
Dynamic Classifier Chains for Multi-label Learning

Abstract: In this paper, we deal with the task of building a dynamic ensemble of chain classifiers for multi-label classification. To do so, we propose two classifier chain algorithms that are able to change the label order of the chain without rebuilding the entire model. Such models allow anticipating the instance-specific chain order without a significant increase in computational burden. The proposed chain models are built using the Naive Bayes classifier and a nearest neighbour approach as a base single-lab…
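To make the chain idea concrete, here is a minimal sketch of a classifier chain with a pluggable label order, using 1-nearest-neighbour as the base learner (the abstract also mentions Naive Bayes). The dataset, distance, and function names are illustrative assumptions, not the authors' implementation:

```python
def nn_predict(train_x, train_y, x):
    """Return the label of the nearest training point (squared Euclidean)."""
    best = min(range(len(train_x)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(train_x[i], x)))
    return train_y[best]

def chain_predict(X, Y, x, order):
    """Predict all labels of x following the given chain order.
    Each label's classifier sees the features plus the labels
    predicted earlier in the chain."""
    pred = {}
    for pos, j in enumerate(order):
        earlier = order[:pos]
        # augment training features with the true values of earlier labels
        aug_train = [xi + [Y[i][k] for k in earlier] for i, xi in enumerate(X)]
        # augment the test point with the labels predicted so far
        aug_x = list(x) + [pred[k] for k in earlier]
        pred[j] = nn_predict(aug_train, [Y[i][j] for i in range(len(X))], aug_x)
    return [pred[j] for j in sorted(pred)]

# toy multi-label data: 2 features, 3 labels
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y = [[0, 0, 0], [1, 0, 1], [0, 1, 1], [1, 1, 0]]

print(chain_predict(X, Y, [1.0, 0.0], order=[0, 1, 2]))  # [1, 0, 1]
print(chain_predict(X, Y, [1.0, 0.0], order=[2, 1, 0]))  # [1, 0, 1]
```

Note that `order` is a plain argument of the prediction routine, so different orders can be tried per instance; the computational point of the paper is making such order changes cheap for the trained model rather than retraining per order.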

Cited by 5 publications
(4 citation statements)
References 61 publications
“…Comparison Methods. To report reliable comparison results, MLDE is compared with 4 state-of-the-art MLC ensemble methods proposed in recent years, including DECC (Trajdos and Kurzynski 2019), ACkEL (Wang et al. 2021), MLWSE (Xia, Chen, and Yang 2021), and BOOMER (Rapp et al. 2020, 2021). Hyper-Parameter Setting. For AdaBoost.C2, we set T = 10 as its upper limit of iteration rounds and δ = 0.01 as the lower limit of iteration error.…”
Section: Experiments: Experimental Setting
confidence: 99%
“…Many methods propose to address this problem by heuristically providing a good chain order (Kajdanowicz and Kazienko 2013; Jun et al. 2019). As an optimal order is hard to find, some ensemble methods (Read et al. 2011; Trajdos and Kurzynski 2019) instead provide an ensemble of CC learners in different orders to avoid the uncertainty of a single order. Unfortunately, so far, the error propagation problem, together with the chain order issue, has not been well solved (Read et al. 2021).…”
Section: Introduction
confidence: 99%
“…However, the authors do not examine the difference between these two strategies. Trajdos and Kurzynski (2019) introduce dynamic chaining based on accuracy estimates of binary relevance predictions in a local neighborhood of the test example, and employ this technique for nearest neighbor and Naïve Bayes classifiers, but, in the latter case, have to make a conditional independence assumption on the label predictions as well.…”
Section: Related Work
confidence: 99%
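The citation above describes choosing an instance-specific chain order from accuracy estimates of the binary relevance models in a local neighbourhood of the test example. A hedged sketch of that idea, with toy error data and helper names that are illustrative assumptions rather than the authors' exact procedure:

```python
def local_accuracy(errors_per_label, neighbour_ids):
    """Per-label accuracy among the neighbours of the test point;
    errors_per_label[j][i] is 1 if label j's binary model erred on point i."""
    return [1 - sum(e[i] for i in neighbour_ids) / len(neighbour_ids)
            for e in errors_per_label]

def dynamic_order(errors_per_label, neighbour_ids):
    """Instance-specific chain order: most locally accurate label first,
    so the least error-prone predictions feed the rest of the chain."""
    acc = local_accuracy(errors_per_label, neighbour_ids)
    return sorted(range(len(acc)), key=lambda j: -acc[j])

# toy validation errors for 3 labels over 6 points (1 = misclassified)
errors = [[0, 1, 0, 0, 1, 0],   # label 0
          [1, 1, 0, 1, 0, 0],   # label 1
          [0, 0, 0, 1, 0, 0]]   # label 2
neighbours = [0, 1, 2]          # indices of the test point's 3 nearest neighbours
print(dynamic_order(errors, neighbours))  # [2, 0, 1]
```

Because the ordering is recomputed per test instance from precomputed validation errors, no chain model needs rebuilding, which matches the low-overhead claim in the abstract.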