2022
DOI: 10.1109/tsmc.2021.3125040
Multi-Instance Ensemble Learning With Discriminative Bags

Cited by 17 publications (6 citation statements); References 30 publications.
“…In general, each component improves the accuracy of the model, and their cooperation improves it significantly. The gains may be explained as follows: (a) the grouping strategy in the BAGS module improves the model’s ability to detect defects in tail classes, so the overall detection accuracy improves significantly [42]; (b) the ranking strategy based on anchor-sample importance in the ISR module increases the model’s attention to important anchor samples, which are essential to the model’s performance improvement [28]…”
Section: Experiments and Results
confidence: 99%
“…
K-means and SVM [56]                            83%
Cascaded Deep Learning and Random Forests [57]  77.2%
ANN [58]                                        75.9%
Feed Forward Neural Network [59]                74.6%
Extreme Learning Machine [60]                   81.8%
Faster-RCNN [61]                                71.2%
CNN-based Framework [50]                        85.2%
Attention [8]                                   86.2%
Gated Attention [8]                             86.4%
mi-Net Attention [52]                           86.7%
ELDB [53]                                       85.8%
TGA-MIL (ours)…”
Section: Algorithms Sensitivity
confidence: 99%
“…StableMIL [23] builds upon identifying a novel connection between MIL and the potential-outcome framework in causal effect estimation. ELDB [18] introduces discriminative-analysis and self-reinforcement mechanisms under the concept of continual learning to maximize the designed discriminative optimization goal. Others include PL [11], AEMI [20], MIHI [19], and so on.…”
Section: Related Work
confidence: 99%
“…Attention-net [6] and loss-attention [10] are two popular MIL networks with well-known attention mechanisms; 3. ELDB [18] designs discriminative-analysis and self-reinforcement mechanisms to optimize the distinguishability of bags’ embedding vectors; 4.…”
Section: Comparative Algorithms
confidence: 99%
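For orientation, the attention-based MIL pooling that the excerpt above attributes to Attention-net can be sketched in a few lines: each instance in a bag receives a learned attention weight, and the bag-level embedding is the weighted average of its instances. This is a minimal illustrative sketch, not the cited authors' implementation; the function name, matrix shapes, and random parameters are assumptions for demonstration (in practice `V` and `w` are learned).

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Pool one bag of instance embeddings into a single bag-level vector.

    instances: (n_instances, d) array, one bag.
    V: (d, h) projection matrix; w: (h,) scoring vector.
    Both are learned in a real model; here they are illustrative.
    """
    # One unnormalized attention score per instance.
    scores = np.tanh(instances @ V) @ w          # shape (n_instances,)
    # Softmax so the weights sum to 1 across the bag.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Bag embedding = attention-weighted average of instances.
    return weights @ instances                   # shape (d,)

rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 8))                    # a bag of 5 instances, 8-dim each
V, w = rng.normal(size=(8, 4)), rng.normal(size=4)
z = attention_mil_pool(bag, V, w)
print(z.shape)                                   # (8,)
```

Because the weights are a convex combination, the pooled vector stays inside the per-dimension range of the bag's instances, which is what makes the bag embedding's distinguishability (the property ELDB optimizes, per the excerpt) depend on where attention concentrates.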