2023
DOI: 10.1016/j.heliyon.2023.e18086

Autoencoder and restricted Boltzmann machine for transfer learning in functional magnetic resonance imaging task classification

Cited by 4 publications (2 citation statements)
References 48 publications

“…This approach may involve the direct transfer of task-related knowledge (in our case, individual identity) through supervised pretraining-based transfer learning, or the acquisition and transfer of knowledge unrelated to the downstream task from the source data via self-supervised pretraining-based transfer learning. Studies have shown that implementing transfer learning in deep learning models using fMRI data enhances model performance on downstream tasks (Hwang et al., 2023; Li et al., 2018; Rahman et al., 2022). Future studies using a deep learning model more suitable for dealing with FC data are warranted.…”
Section: Potential Weaknesses and Further Work
confidence: 99%
“…These deep networks differ in structure and connection weights. Many efforts, such as transfer learning, self-supervised learning, fuzzing with invocation ordering [23, 24], fusion models [25], hierarchical models [26], adapting feature selection algorithms [27], subspace random optimization [28], multi-modal [29], and multi-label [30] techniques, have improved the performance of these models [31].…”
Section: Introduction
confidence: 99%