2021
DOI: 10.1007/978-3-030-87240-3_66

Combining 3D Image and Tabular Data via the Dynamic Affine Feature Map Transform

Cited by 27 publications (9 citation statements)
References 25 publications
“…The above attention-based fusion methods rescaled features through complementary information from another modality, while Pölsterl et al. [52] proposed a dynamic affine transform module that shifted the feature map. The proposed module dynamically produced a scale factor and an offset conditioned on both image and clinical data.…”
Section: Attention-based Fusion Methods
Mentioning confidence: 99%
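The statement above summarizes the core mechanism: a scale factor and an offset are predicted jointly from the image and the clinical variables and then applied to each feature map. A minimal PyTorch sketch of such a conditioning block follows; the class name, the bottleneck MLP, and all layer sizes are assumptions for illustration, not the authors' implementation.

# Hypothetical DAFT-style conditioning block (names and sizes are illustrative).
import torch
import torch.nn as nn


class DAFTBlock(nn.Module):
    """Scales and shifts a 3D feature map using factors predicted from
    both the (pooled) image features and the tabular clinical data."""

    def __init__(self, n_channels: int, n_tabular: int, hidden_dim: int = 16):
        super().__init__()
        # Bottleneck MLP: [pooled image features ; tabular vector] -> (scale, offset)
        self.mlp = nn.Sequential(
            nn.Linear(n_channels + n_tabular, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 2 * n_channels),
        )

    def forward(self, feature_map: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        # feature_map: (B, C, D, H, W) from a 3D CNN; tabular: (B, n_tabular)
        pooled = feature_map.mean(dim=(2, 3, 4))          # global average pool -> (B, C)
        scale, offset = self.mlp(torch.cat([pooled, tabular], dim=1)).chunk(2, dim=1)
        scale = scale.view(*scale.shape, 1, 1, 1)         # broadcast over D, H, W
        offset = offset.view(*offset.shape, 1, 1, 1)
        return scale * feature_map + offset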
“…Missing values are a common problem in structured data. Features with a high missing rate were usually discarded directly, while the remaining missing values were imputed with the mean, the mode, or the values of similar samples selected by K-nearest neighbors [16,24]; some works additionally added the missing status as features [52,68].…”
Section: Structured Data
Mentioning confidence: 99%
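As an illustration of the imputation strategies listed in this excerpt (none of this is taken from the cited works), the following scikit-learn sketch covers mean and mode imputation, a missingness indicator added as an extra feature, and imputation from similar samples via K-nearest neighbors; the toy data are placeholders.

# Toy example of the imputation strategies mentioned above.
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

X = np.array([[72.0, 1.0], [np.nan, 0.0], [68.0, np.nan], [75.0, 1.0]])

# Mean imputation that also appends a binary "was missing" indicator per column.
mean_imp = SimpleImputer(strategy="mean", add_indicator=True)
X_mean = mean_imp.fit_transform(X)

# Mode (most frequent) imputation, typical for categorical clinical variables.
mode_imp = SimpleImputer(strategy="most_frequent")
X_mode = mode_imp.fit_transform(X)

# Impute each missing entry from the K most similar samples.
knn_imp = KNNImputer(n_neighbors=2)
X_knn = knn_imp.fit_transform(X)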
“…Second, applying the recently introduced Dynamic Affine Feature Map Transform (DAFT, Fig. 3) [19]. It is a general-purpose module for CNNs that incites or represses high-level concepts learned from a 3D image by conditioning the feature maps of a convolutional layer on both a patient's image and tabular clinical information.…”
Section: Methods
Mentioning confidence: 99%
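To make the conditioning concrete, the short usage sketch below shows where such a block could sit in a 3D CNN, reusing the hypothetical DAFTBlock from the earlier sketch; channel counts, the number of tabular variables, and tensor shapes are illustrative assumptions.

# Conditioning one convolutional stage on image + tabular data
# (continues the hypothetical DAFTBlock sketch above).
import torch
import torch.nn as nn

conv = nn.Conv3d(in_channels=32, out_channels=64, kernel_size=3, padding=1)
daft = DAFTBlock(n_channels=64, n_tabular=9)   # e.g. 9 clinical variables

image_batch = torch.randn(2, 32, 16, 16, 16)   # intermediate 3D feature maps
tabular_batch = torch.randn(2, 9)              # age, sex, biomarkers, ...

features = torch.relu(conv(image_batch))
conditioned = daft(features, tabular_batch)    # each feature map scaled and shifted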
“…Each landmark patch was then fed into the CNN models, which produced the final classification result using a maximum voting strategy. Pölsterl et al. [35] proposed the dynamic affine feature map transform, an auxiliary module for CNNs that dynamically incites or represses each feature map of a convolutional layer based on both image and tabular biomarkers. A more detailed overview of deep learning algorithms for Alzheimer's disease classification can be found in [36], [14], [37].…”
Section: Literature Review
Mentioning confidence: 99%
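The patch-level aggregation mentioned above can be illustrated in a few lines; the sketch below (with made-up patch and class counts, not the cited authors' code) takes per-patch CNN predictions and picks the most frequent class as the subject-level result.

# Toy majority-vote aggregation over landmark-patch predictions.
import torch

# Suppose a CNN produced class logits for 5 landmark patches of one subject
# over 3 classes (e.g. CN / MCI / AD); values here are random placeholders.
patch_logits = torch.randn(5, 3)

patch_votes = patch_logits.argmax(dim=1)        # class prediction per patch
final_label = torch.mode(patch_votes).values    # most frequent vote wins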