2021
DOI: 10.1007/978-3-030-87240-3_77
AnaXNet: Anatomy Aware Multi-label Finding Classification in Chest X-Ray

Cited by 22 publications (9 citation statements)
References 23 publications
“…The image encoders evaluated for global pathology detection perform similarly with an average AUC of 83%. The DenseNet169 performance reported in [1] could not be reached, which we assume might be due to our limited image resolution of 224 × 224. Neither FSRG nor the naïve transfer learning baseline reach the AUC of AnaXNet, which is built upon an object detector and uses features extracted from high-resolution crops.…”
Section: Few-shot Structured Report Generation
confidence: 86%
“…All other models were trained on a single NVIDIA A40. The DenseNet and Vision Transformer classifiers were trained for 25 epochs with the same hyperparameters as in [1]: Adam optimizer, a learning rate of 1e-4 and unweighted binary cross-entropy loss. The FSRG models were fine-tuned for 10 epochs with a learning rate of 1e-4, AdamW optimizer with no weight decay, and learning rate scheduler with cosine annealing decay and 1 epoch linear warmup.…”
Section: Methods
confidence: 99%
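The schedule described above (a 1-epoch linear warmup followed by cosine annealing decay over the remaining fine-tuning epochs) can be sketched as a plain function. This is a minimal illustration of the schedule shape only; the function name and the assumption that annealing decays to zero are mine, not taken from the cited paper.

```python
import math

def lr_at_epoch(epoch, total_epochs=10, warmup_epochs=1, base_lr=1e-4):
    """Linear warmup for `warmup_epochs`, then cosine annealing toward zero."""
    if epoch < warmup_epochs:
        # Linear ramp from 0 up to base_lr over the warmup epochs.
        return base_lr * epoch / warmup_epochs
    # Cosine decay from base_lr at the end of warmup toward 0.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

In practice a framework scheduler (e.g. a cosine-annealing scheduler with a warmup wrapper) would be used instead of a hand-rolled function; the sketch only makes the two phases of the schedule explicit.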
“…Given the initial training set of anatomical region representations {(R_i, R_i)}_{i=1}^{N}, we define a normalized adjacency matrix A ∈ ℝ^{2k×2k} that captures intra-image and inter-image region correlations. The intra-image correlations, corresponding to the two k × k diagonal blocks of A, are constructed from the region-disease co-occurrence [1], i.e., the number of times two anatomical regions co-occur with the same disease or finding in the set of images R_i, R_i, i = 1, …”
Section: Methods
confidence: 99%
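One plausible reading of the co-occurrence-based adjacency described in the quote can be sketched with NumPy: stack the binary region-by-finding label matrices of two images, count how many findings each pair of regions shares, and row-normalize. The function name and the row-normalization choice are assumptions for illustration; the quoted paper does not specify its exact normalization.

```python
import numpy as np

def cooccurrence_adjacency(labels_a, labels_b):
    """Build a row-normalized (2k x 2k) adjacency matrix from the binary
    region-by-finding label matrices (each k x d) of two images.
    Entry (i, j) counts the findings shared by regions i and j,
    divided by the total co-occurrence count of region i."""
    stacked = np.vstack([labels_a, labels_b]).astype(float)  # (2k, d)
    counts = stacked @ stacked.T                             # shared-finding counts
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0                            # avoid divide-by-zero
    return counts / row_sums
```

The two k × k diagonal blocks of the result hold the intra-image correlations and the off-diagonal blocks the inter-image ones, matching the block structure described in the quote.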