2020
DOI: 10.12928/telkomnika.v18i3.14868

Transfer learning with multiple pre-trained network for fundus classification

Abstract: Transfer learning (TL) is a technique for reusing and modifying a pre-trained network. It reuses the feature-extraction layers of a pre-trained network, so the target domain obtains feature knowledge from the source domain. TL modifies the classification layer of the pre-trained network so that the target domain can perform new tasks according to its purpose. In this article, the target domain is fundus image classification, covering the normal and neovascularization classes. The data consist of 100 patches. The comparison of training and validation d…
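
The abstract describes the standard TL recipe: keep the pre-trained feature-extraction layers and swap the classification layer for the new two-class task. A minimal sketch of that recipe in PyTorch follows; the paper does not state its framework, and the VGG19 backbone and learning rate here are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet and reuse its feature-extraction layers.
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)

# Freeze the feature layers: the target domain inherits the source-domain features.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the classification layer for the new task:
# 2 classes, normal vs. neovascularization.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

# Train only the new head.
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
```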

Cited by 10 publications (14 citation statements, all classified as mentioning; citing years 2020 to 2024). References 16 publications.
“…In future work, I intend to develop a method to extract chemical information from the important regions on MTM for predicting molecular properties. In this study, although a simple and basic CNN architecture was used, the prediction performance of CNN using MTM is expected to be improved by using transfer learning via multiple pre-trained networks, such as AlexNet, VGG19, ResNet101, GoogLeNet, and Inception-V3 [46]. I believe that MTM-embedded molecular structure information can serve as a valuable method of molecular representation for drug discovery.…”
Section: Discussion (mentioning)
confidence: 99%
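
As a hedged illustration of what "transfer learning via multiple pre-trained networks" can look like in practice, the sketch below loads the five architectures named in the statement from torchvision and gives each a two-class head. The weight identifiers are real torchvision strings, but the procedure for choosing among the backbones is not specified by the cited work:

```python
import torch.nn as nn
from torchvision import models

def make_two_class(model):
    # Replace the final classification layer with a 2-class head.
    if hasattr(model, "fc"):          # ResNet101, GoogLeNet, Inception-V3
        model.fc = nn.Linear(model.fc.in_features, 2)
    else:                             # AlexNet, VGG19 use a Sequential classifier
        model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
    return model

# Note: Inception-V3 expects 299x299 inputs, the others 224x224;
# its auxiliary classifier is ignored in this sketch.
backbones = {
    "AlexNet":      models.alexnet(weights="IMAGENET1K_V1"),
    "VGG19":        models.vgg19(weights="IMAGENET1K_V1"),
    "ResNet101":    models.resnet101(weights="IMAGENET1K_V1"),
    "GoogLeNet":    models.googlenet(weights="IMAGENET1K_V1"),
    "Inception-V3": models.inception_v3(weights="IMAGENET1K_V1"),
}
candidates = {name: make_two_class(m) for name, m in backbones.items()}
```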
“…When huge numbers of data samples (i.e., photographs) are involved, there are many parameters in the CNN layers to be considered [40], [41]. However, small datasets will cause overfitting [40].…”
Section: Transfer Learning (mentioning)
confidence: 99%
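
A quick way to see the parameter-count and overfitting trade-off this statement points to is to count trainable parameters before and after freezing a backbone. ResNet101 is an arbitrary choice here, not one made by the cited paper:

```python
import torch.nn as nn
from torchvision import models

def n_trainable(m):
    # Count parameters that would be fitted during training.
    return sum(p.numel() for p in m.parameters() if p.requires_grad)

model = models.resnet101(weights="IMAGENET1K_V1")
print(f"full fine-tuning: {n_trainable(model):,} parameters")  # roughly 44.5M

# Freezing the backbone leaves only the small new head to fit,
# which lowers the overfitting risk on a small dataset.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new layers are trainable by default
print(f"head only: {n_trainable(model):,} parameters")         # roughly 4K
```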
“…It used rotation, skewing, and elastic distortion augmentation methods for the images, and then used a pre-trained CNN model as the feature extractor and an SVM as the category classifier. This technique has been applied in many fields and has shown better accuracy than traditional convolutional networks [25][26][27]. In this paper, after we augmented the training set, we combined transfer learning and fine-tuning and produced five models (Fig.…”
Section: Model Design and Architecture (mentioning)
confidence: 99%
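
A hedged sketch of the pipeline this statement describes, augmentation followed by a pre-trained CNN as a fixed feature extractor and an SVM as the classifier, is given below. The dataset path, backbone, and augmentation strengths are illustrative assumptions, and scikit-learn supplies the SVM:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from sklearn.svm import SVC

# Rotation, skew, and elastic-distortion augmentation for the training images.
augment = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.RandomAffine(degrees=0, shear=10),
    transforms.ElasticTransform(alpha=50.0),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=augment)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32)

# Pre-trained CNN with the classification head removed = fixed feature extractor.
cnn = models.resnet101(weights="IMAGENET1K_V1")
cnn.fc = nn.Identity()
cnn.eval()

features, labels = [], []
with torch.no_grad():
    for x, y in loader:
        features.append(cnn(x))
        labels.append(y)

# SVM as the category classifier on top of the CNN features.
svm = SVC(kernel="rbf").fit(torch.cat(features).numpy(), torch.cat(labels).numpy())
```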
“…The reason for this result is that the models provided in this paper reduced the irrelevant information in the photos. From the perspective of the models, the main focus of preventing overfitting should be the entropic capacity of the model, that is, how much information the model can store [25,27]. There are different ways to modulate entropic capacity; the main one is the choice of the number of parameters in your model, i.e.…”
Section: (mentioning)
confidence: 99%
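
The "entropic capacity" idea can be made concrete by comparing classifier heads with different parameter counts: a wider head can store more information from the training data, while a narrower head plus dropout limits it. The layer sizes below are illustrative, not values from the cited paper:

```python
import torch.nn as nn

# A wide head has many parameters and can memorize more of a small dataset...
high_capacity_head = nn.Sequential(
    nn.Linear(2048, 1024),
    nn.ReLU(),
    nn.Linear(1024, 2),
)

# ...while a narrow head plus dropout restricts how much information the
# model can store, one concrete way to reduce entropic capacity.
low_capacity_head = nn.Sequential(
    nn.Linear(2048, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)
```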