2020
DOI: 10.1109/tmi.2020.3006138
Rectifying Supporting Regions With Mixed and Active Supervision for Rib Fracture Recognition

Cited by 29 publications (25 citation statements)
References 27 publications
“…Instead, focus was given to semi- and weak-supervision learning [15], [17]-[20], which relies on either pixel-level or image-level annotations combined with unlabeled images. The research most relevant to our framework is [1]-[3], in which a single model with both a segmentation and a classification output branch is jointly trained on the two types of data. Our framework differs from theirs because end-to-end classification training with the entire image as input is difficult due to the massive size of whole-slide images.…”
Section: A. Mixed Supervision Learning
confidence: 99%
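The mixed-supervision setup this statement describes — one shared backbone with a segmentation branch trained on pixel-labeled data and a classification branch trained on image-labeled data — can be sketched as follows. This is a minimal NumPy illustration, not the cited papers' implementation; the tiny linear "encoder" is a stand-in for a real CNN, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class DualHeadModel:
    """Shared encoder with a per-pixel segmentation head and an
    image-level classification head (illustrative stand-in for a CNN)."""
    def __init__(self, h=8, w=8, d=16):
        self.W_enc = rng.normal(0, 0.1, (h * w, d))
        self.W_seg = rng.normal(0, 0.1, (d, h * w))  # per-pixel logits
        self.W_cls = rng.normal(0, 0.1, (d, 1))      # image-level logit

    def forward(self, x):                 # x: (n, h*w) flattened images
        z = np.tanh(x @ self.W_enc)       # shared features
        return z @ self.W_seg, z @ self.W_cls

def bce(logits, targets):
    """Binary cross-entropy on raw logits."""
    p = 1.0 / (1.0 + np.exp(-logits))
    return float(np.mean(-(targets * np.log(p + 1e-9)
                           + (1 - targets) * np.log(1 - p + 1e-9))))

def mixed_loss(model, x, pixel_mask=None, image_label=None):
    """Joint loss: each sample contributes whichever supervision it has."""
    seg_logits, cls_logit = model.forward(x)
    loss = 0.0
    if pixel_mask is not None:            # strong (pixel-level) labels
        loss += bce(seg_logits, pixel_mask)
    if image_label is not None:           # weak (image-level) labels
        loss += bce(cls_logit, image_label)
    return loss
```

Both branches share the encoder, so gradients from either supervision type update the same representation — the mechanism the quoted papers exploit.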
“…The pixel-level experiment is conducted based on the source code of Mahendra Khened et al. [53]. This generalized pathology processing framework placed 5th in the Camelyon17 Challenge [16], 4th in DigestPath2019 [15], and 3rd in the PAIP challenge. It uses all existing pixel-level fine-grained labels to train a patch segmentation model and image-level labels to train a whole-slide image classification model, without extracting hidden pixel-level pseudo labels.…”
Section: A. Experimental Setting
confidence: 99%
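The two-stage pipeline the quote describes — a patch-level model whose outputs are aggregated into features for a whole-slide classifier — can be sketched roughly as below. The function names and the threshold-based patch scorer are illustrative assumptions, not Khened et al.'s code.

```python
import numpy as np

def tile(slide, patch):
    """Split a 2D slide array into non-overlapping square patches."""
    h, w = slide.shape
    return [slide[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def patch_tumor_prob(p):
    # Stand-in for the patch segmentation model's output:
    # fraction of pixels above an intensity threshold.
    return float((p > 0.5).mean())

def slide_features(slide, patch=32):
    """Aggregate patch-level probabilities into global slide features
    that a downstream whole-slide classifier would consume."""
    probs = np.array([patch_tumor_prob(p) for p in tile(slide, patch)])
    return np.array([probs.max(), probs.mean(), (probs > 0.5).mean()])
```

This avoids end-to-end training on the full slide — exactly the difficulty the earlier quoted statement attributes to the massive size of whole-slide images.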
“…Specifically, to obtain varied views of the lesions for input disentanglement, the first step of our model is to locate the lesion regions. Previous works [18]-[20] rely heavily on segmentation labels or bounding boxes for further feature disentanglement. Unfortunately, such substantial lesion annotations are far more expensive and are unavailable in our dataset, where only category labels are accessible.…”
Section: Introduction
confidence: 99%
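One standard way to locate lesion regions from category labels alone, as the quoted passage requires, is a class activation map (CAM): weight the final convolutional feature maps by the classifier weights of the target class. A minimal sketch, with shapes and min-max normalization as assumptions:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, cls_idx):
    """feature_maps: (c, h, w) from the last conv layer;
    fc_weights: (num_classes, c) from the linear classifier."""
    # Contract the channel axis: weighted sum of feature maps -> (h, w)
    cam = np.tensordot(fc_weights[cls_idx], feature_maps, axes=1)
    cam -= cam.min()          # shift to non-negative
    if cam.max() > 0:
        cam /= cam.max()      # normalize to [0, 1]
    return cam
```

Thresholding the normalized map then yields a coarse lesion mask without any pixel-level labels.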
“…This situation is especially common for chest X-rays (CXR), the world's most widely used medical images. Apart from abundant unlabeled data, CXR datasets often carry image-level annotations that can be obtained easily by text mining the numerous radiological reports [26,9], while lesion-level annotations (e.g., bounding boxes) are scarce [7,27]. Therefore, efficiently leveraging the available annotations to develop thoracic disease detection algorithms has significant practical value.…”
Section: Introduction
confidence: 99%