2019
DOI: 10.1007/978-3-030-11024-6_23
Ordinal Regression with Neuron Stick-Breaking for Medical Diagnosis

Cited by 44 publications
(44 citation statements)
References 19 publications
“…Specifically, the MOW-Net significantly improves the recall of the unsure class by 0.28 over the previous best result. This is significant for clinical diagnosis, since a higher recall of the unsure class can encourage more follow-ups and reduce the probability that nodules are misdiagnosed as malignant or benign. In addition, the precision of the benign class and the recall of the malignant class improve substantially.…”
[Table residue comparing Poisson [11], NSB [10], UDM [9], and CORF [12] omitted; column headers unrecoverable.]
Section: Classification Performance
confidence: 99%
See 1 more Smart Citation
“…Specifically, the MOW-Net significantly improves the recall of the unsure class by 0.28 over the previous best result. This is significant for the clinical diagnosis since a higher recall of the unsure class can [11] 0.542 0.548 0.794 0.648 0.568 0.624 0.594 0.489 0.220 0.303 NSB [10] 0.553 0.565 0.641 0.601 0.566 0.594 0.580 0.527 0.435 0.476 UDM [9] 0.548 0.541 0.767 0.635 0.712 0.515 0.598 0.474 0.320 0.382 CORF [12] 0. encourage more follow-ups and reduce the probabilities of the nodules that are misdiagnosed as malignant or benign. In addition, the precision of benign and the recall of the malignant get a great improvement.…”
Section: Classification Performancementioning
confidence: 99%
“…However, the UDM has some additional parameters that need to be carefully tuned. The neural stick-breaking (NSB) method calculates the probabilities through the C − 1 predicted classification bounds, where C is the number of classes [10]. The unimodal method constrains each fully-connected output to follow a unimodal distribution such as Poisson or Binomial [11].…”
Section: Introduction
confidence: 99%
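The stick-breaking idea described in the quote above can be sketched as follows: C − 1 sigmoid-activated outputs each "break off" a piece of the remaining probability mass, so the C class probabilities sum to one and the tail probabilities P(y > k) decrease monotonically. This is a minimal NumPy illustration of that parametrization; the function name and example values are hypothetical, not the cited authors' code.

```python
import numpy as np

def stick_breaking_probs(logits):
    """Map C-1 real-valued outputs to C ordinal class probabilities
    via a stick-breaking parametrization (sketch of the NSB idea)."""
    v = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))  # sigmoids in (0, 1)
    remaining = 1.0
    probs = []
    for vk in v:
        probs.append(remaining * vk)   # break off a piece of the stick
        remaining *= (1.0 - vk)        # mass left for the later classes
    probs.append(remaining)            # last class takes the remainder
    return np.array(probs)

# Three bound outputs -> four ordinal classes; probabilities sum to 1.
p = stick_breaking_probs([0.3, -0.1, 1.2])
assert np.isclose(p.sum(), 1.0)

# Tail probabilities P(y > k) are monotonically decreasing by construction.
tails = 1.0 - np.cumsum(p)
assert np.all(np.diff(tails) <= 1e-12)
```

Because each piece is a product of terms in (0, 1), the monotonic decrease of the cumulative probabilities holds automatically, without the ordering constraints that plain multi-branch classifiers need.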
“…Moreover, N − 1 weights need to be manually fine-tuned to balance the CE loss of each branch. [34,22] propose to use a stick-breaking process to re-parametrize the outputs of N − 1 units, which is associated with the Bayesian nonparametric Dirichlet process. With this, the cumulative probabilities achieve the expected monotonic decrease, but it cannot be overlooked that the approach is considerably more sophisticated than a standard CE loss.…”
Section: Introduction
confidence: 99%
“…Compared to the case of a single image [24,37,31,38,29,26], richer and complementary information can be expected in a set, because the samples are captured from multiple views [36]. However, it also poses several challenges, including a) a variable number of samples within a set, b) larger inner-set variability than its video-based recognition counterpart, and c) order-less data.…”
Section: Introduction
confidence: 99%