2022
DOI: 10.1093/bioinformatics/btac369

transferGWAS: GWAS of images using deep transfer learning

Abstract (Motivation): Medical images can provide rich information about diseases and their biology. However, investigating their association with genetic variation requires non-standard methods. We propose transferGWAS, a novel approach to perform genome-wide association studies directly on full medical images. First, we learn semantically meaningful representations of the images based on a transfer learning task, during which a deep neural network is trained on independent but similar data. Then, we pe…
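To make the two-stage idea in the abstract concrete, here is a minimal sketch, not the authors' implementation: images are embedded with an ImageNet-pretrained ResNet50, and each embedding dimension is then tested for association with each variant. The variable names (`images`, `genotypes`) are hypothetical, and the plain per-SNP linear regression below stands in for the proper mixed-model GWAS machinery used in practice.

```python
import numpy as np
import torch
from scipy import stats
from torchvision import models

# Stage 1: embed images with a network pretrained on independent data.
# ResNet50 is the backbone named in the citing papers below; the exact
# layer and preprocessing used by transferGWAS may differ.
backbone = models.resnet50(weights="IMAGENET1K_V1")
encoder = torch.nn.Sequential(*list(backbone.children())[:-1],  # drop the fc head
                              torch.nn.Flatten()).eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> np.ndarray:
    """images: (n, 3, 224, 224), ImageNet-normalized -> (n, 2048) features."""
    return encoder(images).numpy()

# Stage 2: test each embedding dimension against each variant.
# A plain per-SNP linear regression stands in for the mixed-model
# association testing used in real GWAS pipelines.
def association_scan(emb: np.ndarray, genotypes: np.ndarray) -> np.ndarray:
    """emb: (n, d); genotypes: (n, m) dosages in {0, 1, 2} -> (m, d) p-values."""
    pvals = np.empty((genotypes.shape[1], emb.shape[1]))
    for j in range(genotypes.shape[1]):
        for k in range(emb.shape[1]):
            pvals[j, k] = stats.linregress(genotypes[:, j], emb[:, k]).pvalue
    return pvals
```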

Citations: cited by 28 publications (29 citation statements); references 46 publications. Of the 29 citation statements, 2 are supporting, 27 mentioning, and 0 contrasting.
“…Many CNN models have been trained on ImageNet and are widely used in the image-processing field to learn complex patterns from images. In addition to the ResNet50 [129] model used by transferGWAS [31], we implemented 10 more pre-trained CNN models: AlexNet [130], Vgg16 [131], Vgg19 [131], GoogLeNet (Inception V1) [132], Inception (V3) [132], ResNet18 [129], ResNet34 [129], SqueezeNet [133], MobileNet [134], and ShuffleNet [135]. These pre-trained models are available in PyTorch [136] and represent different designs and architectures, such as layer depth, kernel size, and hyperparameters.…”
Section: Methods (mentioning)
confidence: 99%
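All of the architectures named in this excerpt ship with torchvision. The following is a minimal sketch of how they might be loaded for feature extraction; the weight tags and the specific MobileNet/ShuffleNet variants are assumptions, not details from the citing paper.

```python
import torch
from torchvision import models

# ImageNet-pretrained backbones named in the excerpt. The exact
# variants (e.g. MobileNet V2, ShuffleNet V2 x1.0) are assumptions.
backbones = {
    "alexnet":    models.alexnet(weights="IMAGENET1K_V1"),
    "vgg16":      models.vgg16(weights="IMAGENET1K_V1"),
    "vgg19":      models.vgg19(weights="IMAGENET1K_V1"),
    "googlenet":  models.googlenet(weights="IMAGENET1K_V1"),
    "inception":  models.inception_v3(weights="IMAGENET1K_V1"),  # expects 299x299 inputs
    "resnet18":   models.resnet18(weights="IMAGENET1K_V1"),
    "resnet34":   models.resnet34(weights="IMAGENET1K_V1"),
    "resnet50":   models.resnet50(weights="IMAGENET1K_V1"),
    "squeezenet": models.squeezenet1_0(weights="IMAGENET1K_V1"),
    "mobilenet":  models.mobilenet_v2(weights="IMAGENET1K_V1"),
    "shufflenet": models.shufflenet_v2_x1_0(weights="IMAGENET1K_V1"),
}

# Example: drop the ResNet classifier head to expose pooled features.
extractor = torch.nn.Sequential(*list(backbones["resnet50"].children())[:-1],
                                torch.nn.Flatten()).eval()
```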
“…Many CNN models have been trained on ImageNet and are widely used in the image-processing field to learn complex patterns from images. In addition to the ResNet50 [129] model used by transferGWAS [31], we implemented 10 more pre-trained CNN models: AlexNet [130], Vgg16 [131], Vgg19 [131], GoogLeNet (Inception V1) [132], Inception (V3) [132], ResNet18 [129], ResNet34 [129], SqueezeNet [133], MobileNet [134], and ShuffleNet [135]. … We began by combining the original left and right retinal fundus images with the images rotated by 90°, 180°, and 270°, each with and without horizontal mirroring. Next, we input these eight retinal fundus images into each pre-trained model and averaged the outputs from the last convolutional layer.…”
Section: Note: One Supplementary Information PDF File and One Supplem… (mentioning)
confidence: 99%
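The eight-view averaging described in this excerpt is straightforward to reproduce in PyTorch. The sketch below assumes square image tensors (so quarter-turn rotations preserve the shape) and a hypothetical `extractor` that maps a batch of images to pooled feature vectors; it illustrates the stated procedure rather than reproducing the citing paper's code.

```python
import torch

@torch.no_grad()
def eight_view_features(extractor: torch.nn.Module, img: torch.Tensor) -> torch.Tensor:
    """Average pooled CNN features over the eight views described above:
    rotations of 0°, 90°, 180°, and 270°, each with and without
    horizontal mirroring. `img` is a single (3, H, W) tensor with
    H == W; `extractor` maps (N, 3, H, W) -> (N, D) feature vectors.
    """
    views = []
    for k in range(4):                                # quarter-turn rotations
        rotated = torch.rot90(img, k, dims=(-2, -1))
        views.append(rotated)
        views.append(torch.flip(rotated, dims=[-1]))  # horizontal mirror
    batch = torch.stack(views)                        # (8, 3, H, W)
    return extractor(batch).mean(dim=0)               # average the 8 outputs
```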