2019
DOI: 10.1101/535377
Preprint

Transfer learning of deep neural network representations for fMRI decoding

Abstract: Background: Deep neural networks have revolutionised machine learning, with unparalleled performance in object classification. However, in brain imaging (e.g. fMRI), the direct application of Convolutional Neural Networks (CNN) to decoding subject states or perception from imaging data seems impractical given the scarcity of available data. New method: In this work we propose a robust method to transfer information from deep learning (DL) features to brain fMRI data with the goal of decoding. By adopting Reduced R…
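
The general idea sketched in the abstract, linking deep-network features to fMRI responses in order to decode stimuli, can be illustrated with a toy example. The snippet below is not the authors' (truncated) method: it simply fits a ridge-regression map from simulated voxel patterns to CNN-like features and decodes a pattern by nearest-neighbour matching in feature space. All names, shapes, and data are placeholders.

    # Toy sketch only: a linear map from simulated voxel patterns to CNN-like
    # features, followed by nearest-neighbour decoding in feature space.
    # Not the authors' method; all data, shapes, and names are placeholders.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_train, n_voxels, n_feat = 50, 2000, 512

    cnn_features = rng.normal(size=(n_train, n_feat))    # DL features of the stimuli
    W = rng.normal(size=(n_feat, n_voxels))              # hidden "true" mapping
    fmri = 0.1 * cnn_features @ W + rng.normal(size=(n_train, n_voxels))

    mapper = Ridge(alpha=10.0).fit(fmri, cnn_features)   # voxels -> feature space

    # Decode one pattern: predict its features, pick the closest training stimulus.
    pred = mapper.predict(fmri[:1])
    decoded = int(np.argmin(np.linalg.norm(cnn_features - pred, axis=1)))
    print("decoded stimulus index:", decoded)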

Cited by 3 publications (3 citation statements)
References 60 publications
“…The second most popular strategy to apply transfer learning was fine-tuning certain parameters in a pretrained CNN [34, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146]. The remaining approaches first optimized a feature extractor (typically a CNN or a SVM), and then trained a separated model (SVMs [30, 45, 147, 148, 149], long short-term memory networks [150, 151], clustering methods [148, 152], random forests [70, 153], multilayer perceptrons [154], logistic regression [148], elastic net [155], CNNs [156]). Additionally, Yang et al [157] ensembled CNNs and fine-tuned their individual contribution.…”
Section: Results (mentioning)
Confidence: 99%
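
As a concrete illustration of the second family of approaches described in this excerpt, a fixed feature extractor followed by a separately trained classifier, the sketch below pairs a frozen, ImageNet-pretrained ResNet-18 from torchvision with a linear SVM from scikit-learn. It is a generic example rather than the pipeline of any cited study; the image tensors and labels are random placeholders.

    # Generic "pretrained feature extractor + separate classifier" sketch.
    import torch
    import torchvision.models as models
    from sklearn.svm import SVC

    # Frozen pretrained CNN used purely as a feature extractor.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()        # drop the ImageNet classification head
    backbone.eval()

    def extract_features(images: torch.Tensor) -> torch.Tensor:
        """Return 512-dim ResNet-18 features for a batch of 3x224x224 images."""
        with torch.no_grad():
            return backbone(images)

    # Placeholder data: 40 images with binary labels (stand-ins for a real dataset).
    images = torch.randn(40, 3, 224, 224)
    labels = torch.randint(0, 2, (40,))

    features = extract_features(images).numpy()

    # Separately trained downstream model (an SVM, as in several cited approaches).
    clf = SVC(kernel="linear").fit(features, labels.numpy())
    print("training accuracy:", clf.score(features, labels.numpy()))
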
“…Most fMRI experiments comprise tens to hundreds of participants due to experimental costs or participant selection. It is natural to use transfer learning to alleviate the data scarcity problem in the target domain (e.g., small sample datasets) by utilizing the knowledge acquired in the source domain (e.g., large cohorts; Gao, Zhang, Wang, Guo, & Zhang, 2019; Svanera et al, 2019; Thomas, Müller, & Samek, 2019; X. Wang et al, 2020). The fMRI data vary across datasets (e.g., scanner, scanning parameters, task design, template space), so it remains an open question how far the DNN can transfer‐learn in fMRI.…”
Section: Introduction (mentioning)
Confidence: 99%
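
A minimal sketch of the source-to-target transfer this excerpt refers to might look as follows: a decoder pretrained on a large source dataset is adapted to a small target dataset by freezing its body and retraining only the final head. The SmallDecoder class, the checkpoint name, and all shapes are hypothetical, chosen purely for illustration.

    # Hypothetical fine-tuning sketch: freeze the pretrained body, retrain the head.
    import torch
    import torch.nn as nn

    class SmallDecoder(nn.Module):
        def __init__(self, n_voxels: int, n_classes: int):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_voxels, 256), nn.ReLU(),
                                      nn.Linear(256, 64), nn.ReLU())
            self.head = nn.Linear(64, n_classes)

        def forward(self, x):
            return self.head(self.body(x))

    model = SmallDecoder(n_voxels=5000, n_classes=4)
    # model.load_state_dict(torch.load("source_pretrained.pt"))  # hypothetical checkpoint

    # Freeze the pretrained body; only the head is updated on the target data.
    for p in model.body.parameters():
        p.requires_grad = False

    optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Placeholder target-domain data: 30 scans, 4 decoding classes.
    x = torch.randn(30, 5000)
    y = torch.randint(0, 4, (30,))

    for _ in range(10):                  # a few epochs on the small target set
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
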
“…For diagnosis and classification purposes, simple conceptual decision-making models that can learn are widely used. [15][16][17][18] The decision tree is a reliable and effective decision-making technique which in general provides high classification accuracy; moreover, interpretation and implementation of these models are quite easy compared to most classification algorithms.…”
Section: Introduction (mentioning)
Confidence: 99%
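
For reference, the kind of decision-tree classifier this last excerpt alludes to can be built and inspected in a few lines. The synthetic features, labels, and depth limit below are placeholders; the printed rules illustrate why such models are considered easy to interpret.

    # Toy decision-tree classifier on synthetic data, with its rules printed.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))                  # 100 subjects, 5 synthetic features
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # toy diagnostic label

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print("training accuracy:", tree.score(X, y))
    print(export_text(tree))                       # human-readable decision rules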