2022 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm54844.2022.00068
Fast Stochastic Recursive Momentum Methods for Imbalanced Data Mining


Cited by 5 publications (9 citation statements) · References 19 publications
“…The momentum average is applied to both the outer and inner estimators to track individual ranking scores. More recently, the algorithms proposed in [41] reduce convergence complexity via parallel speed-up, and Jiang et al. [20] and Wu et al. [47] introduced momentum-based variance-reduction techniques into AUPRC maximization to reduce convergence complexity. While we developed distributed AUPRC optimization concurrently with [13], they focus on X-risk optimization in federated learning.…”
Section: AUPRC Maximization
confidence: 99%
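
The recursive-momentum update behind the variance reduction mentioned in this excerpt can be sketched in a few lines. What follows is a generic STORM-style illustration on a least-squares toy problem, not the cited papers' exact AUPRC algorithm; the names storm_update and grad_fn and all constants are illustrative.

import numpy as np

def storm_update(grad_fn, w, w_prev, d_prev, batch, beta=0.1):
    # Recursive momentum estimator:
    #   d_t = g(w_t; B_t) + (1 - beta) * (d_{t-1} - g(w_{t-1}; B_t))
    # Both gradients use the SAME fresh minibatch B_t, so the correction
    # term cancels most of the stochastic noise (variance reduction).
    g_cur = grad_fn(w, batch)
    g_prev = grad_fn(w_prev, batch)
    return g_cur + (1.0 - beta) * (d_prev - g_prev)

# Toy usage on a least-squares objective.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 5)), rng.normal(size=200)

def grad_fn(w, idx):
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

w = np.zeros(5)
w_prev = w.copy()
d = grad_fn(w, rng.choice(200, 16, replace=False))   # warm-start d_0
for t in range(500):
    idx = rng.choice(200, 16, replace=False)
    d = storm_update(grad_fn, w, w_prev, d, idx)
    w_prev, w = w, w - 0.05 * d

Evaluating both gradients on the same fresh minibatch is what lets the correction term cancel most of the stochastic noise without the large batches or periodic full-gradient passes that SVRG-type estimators require.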
“…Because X-risk optimization is a sub-problem of conditional stochastic optimization, and federated learning can be regarded as decentralized learning with a specific network topology (see Section 3.1), our methods can also be applied to their problem. Overall, existing methods mainly focus on the finite-sum, single-machine setting [20, 37, 41, 42, 47]. To cope with the biased stochastic gradient, they maintain an inner state for each local data point.…”
Section: AUPRC Maximization
confidence: 99%
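
A minimal sketch of that per-sample inner state follows, assuming a linear scorer, a squared-hinge pairwise surrogate, and a purely illustrative outer transform f(u) = u / (1 + u); none of these specific choices are taken from the cited papers.

import numpy as np

rng = np.random.default_rng(1)
dim = 8
X_pos = rng.normal(size=(20, dim))     # minority-class positives
X_neg = rng.normal(size=(500, dim))    # negatives
w = np.zeros(dim)
u = np.zeros(len(X_pos))               # one tracked inner state per positive

gamma, lr = 0.9, 0.05
for t in range(300):
    i = rng.integers(len(X_pos))                            # sample one positive
    Xn = X_neg[rng.choice(len(X_neg), 32, replace=False)]   # negative minibatch
    # Squared-hinge surrogate on score gaps with a linear scorer s(x) = x @ w;
    # its minibatch mean estimates the inner expectation g_i(w) being tracked.
    gap = np.maximum(1.0 - (X_pos[i] - Xn) @ w, 0.0)
    # Moving-average update of THIS positive's inner state only:
    u[i] = (1 - gamma) * u[i] + gamma * (gap ** 2).mean()
    # Chain rule through the illustrative outer transform f(u) = u / (1 + u):
    f_prime = 1.0 / (1.0 + u[i]) ** 2
    grad_inner = (-2.0 * gap[:, None] * (X_pos[i] - Xn)).mean(axis=0)
    w -= lr * f_prime * grad_inner

Because u[i] is refreshed only when positive i is sampled, the outer gradient sees a slowly varying, low-variance estimate of the inner expectation rather than a single-batch estimate, which is how these methods keep the bias of the compositional stochastic gradient under control.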
“…These methods innovatively update the biased estimates for each data point and apply momentum averages to both the outer and inner estimators, tracking individual ranking scores. Further advancements in the field [16, 17, 18] have introduced algorithms that leverage techniques such as parallel speed-up and variance reduction to improve convergence rates. These approaches hold the potential to boost the efficacy of doublet-detection tools, enabling more accurate annotation of doublets in imbalanced datasets.…”
Section: Introduction
confidence: 99%