2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla52953.2021.00049

Progressive Transmission and Inference of Deep Learning Models

Cited by 6 publications (9 citation statements)
References 17 publications
“…A progressive transfer framework for deep learning models is introduced in [76] by using the principle of transferring large image files over the web. The framework enables a deep learning model to be divided and progressively transferred in stages.…”
Section: Optimizing the Model Prior to Deployment
confidence: 99%
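As a rough illustration of the staged-transfer idea described in this citation, here is a minimal Python sketch. It assumes a PyTorch-style state dict partitioned into equal stages; the stage split, the byte channel, and the receiver-side skeleton are illustrative assumptions, not the actual protocol of [76].

    import io
    import torch
    import torch.nn as nn

    def make_model() -> nn.Module:
        # Toy stand-in for the model being shipped; the real framework
        # targets full-scale deep networks.
        return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    def split_into_stages(model: nn.Module, num_stages: int):
        # Partition the state dict's entries into ordered stages.
        items = list(model.state_dict().items())
        size = max(1, len(items) // num_stages)
        return [dict(items[i:i + size]) for i in range(0, len(items), size)]

    def to_bytes(stage: dict) -> bytes:
        # Serialize one stage so it can travel over a network channel.
        buf = io.BytesIO()
        torch.save(stage, buf)
        return buf.getvalue()

    sender = make_model()    # trained model on the server side
    receiver = make_model()  # skeleton on the client side
    x = torch.randn(1, 8)

    for payload in (to_bytes(s) for s in split_into_stages(sender, num_stages=2)):
        stage = torch.load(io.BytesIO(payload))
        receiver.load_state_dict(stage, strict=False)  # merge whatever arrived
        y = receiver(x)  # inference is already possible after each stage

The point of the staged design is that the receiver can run (lower-quality) inference after each stage arrives, rather than waiting for the full model to land.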
“…It is well established that deep learning-based supervised medical image classification requires accurately annotated labels to effectively train classification models. Supervised training with inaccurately annotated or noisy labels can impair a model's generalizability, resulting in subpar test performance [13], [27], [11], [10].…”
Section: Introduction
confidence: 99%
“…However, to date, the community still has a limited understanding of the fundamental cause and mitigation of the overconfidence issue. Furthermore, existing robust algorithms utilize only in-distribution data for training and typically consider out-of-distribution (OOD) examples to be harmful to the training of deep neural networks [32][33][34][35].…”
Section: Motivations
confidence: 99%
“…Label noise can be separated into two categories: 1) closed-set noise, where instances with noisy labels have true class labels within the noisy label set [27,59,63,67,69,160]; 2) open-set noise, where instances with noisy labels have true class labels outside the noisy label set [32][33][34][35]. In the existing literature, open-set noise is always considered to be harmful to the training of DNNs, like closed-set noise [32][33][34][35].…”
Section: Motivation
confidence: 99%
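To make the closed-set versus open-set distinction concrete, here is a tiny Python sketch; the class names and the availability of each instance's true label are simulation-only assumptions (in practice the true class of a noisy instance is unknown).

    # Closed-set: the instance's true class is inside the dataset's label set;
    # open-set: its true class lies outside it.
    def noise_type(true_label: str, label_set: set) -> str:
        return "closed-set" if true_label in label_set else "open-set"

    label_set = {"cat", "dog", "bird"}        # classes the dataset claims to cover
    print(noise_type("dog", label_set))       # closed-set: mislabeled within the set
    print(noise_type("airplane", label_set))  # open-set: true class not in the set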