2022
DOI: 10.3390/s22228872

Analysis of the Application Efficiency of TensorFlow and PyTorch in Convolutional Neural Network

Abstract: In this paper, we present an analysis of important aspects that arise during the development of neural network applications. Our aim is to determine if the choice of library can impact the system’s overall performance, either during training or design, and to extract a set of criteria that could be used to highlight the advantages and disadvantages of each library under consideration. To do so, we first extracted the previously mentioned aspects by comparing two of the most popular neural network libraries—PyTorch…


Cited by 21 publications (9 citation statements)
References 21 publications
“…376,377 In their comparison of these two libraries, Novac et al concluded that PyTorch offers a more beginner-friendly experience with faster training and execution, while TensorFlow allows for higher flexibility and better resulting accuracy. 378 These top imported libraries represent clear starting points for those who are new to the field and want to start understanding, writing, and using ML algorithms.…”
Section: Discussion (mentioning)
confidence: 99%
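The usability contrast described in this statement is easiest to see side by side. The sketch below is illustrative only (the layer sizes and input shape are assumptions, not taken from Novac et al.): it defines the same small CNN once as a PyTorch nn.Module and once as a Keras Sequential model.

```python
# Minimal sketch: the same small CNN expressed in both libraries.
# Layer sizes and the 28x28 grayscale input are illustrative assumptions.

# --- PyTorch: imperative, class-based definition ---
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 14 * 14, num_classes)

    def forward(self, x):
        x = self.features(x)              # (N, 16, 14, 14) for a 28x28 input
        return self.classifier(x.flatten(1))

# --- TensorFlow/Keras: declarative Sequential definition ---
import tensorflow as tf

keras_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])
```

The PyTorch version gives explicit control over the forward pass, while the Keras version is more declarative; this mirrors the beginner-friendliness versus flexibility trade-off the statement attributes to the comparison.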
“…According to the mean of the Dice coefficient in the validation set, the best model was chosen for automatic segmentation. Our network was developed using the Python package “PyTorch” and trained using RTX-2080Ti in the cloud computation platform “AI-Galaxy” ( http://www.ai-galaxy.cn/ ) [ 19 ].…”
Section: Methods (mentioning)
confidence: 99%
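For readers unfamiliar with the selection criterion mentioned in this statement, the following is a minimal PyTorch sketch of the Dice coefficient, Dice = 2|A ∩ B| / (|A| + |B|), computed over a batch of binary masks. The function name, mask shapes, and epsilon value are hypothetical and not taken from the cited study.

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Mean Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of shape (N, H, W)."""
    pred = pred.float().flatten(1)       # (N, H*W)
    target = target.float().flatten(1)   # (N, H*W)
    intersection = (pred * target).sum(dim=1)
    dice = (2 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return dice.mean()

# Example with dummy binary masks (illustrative only):
# pred = torch.rand(2, 64, 64) > 0.5
# target = torch.rand(2, 64, 64) > 0.5
# print(dice_coefficient(pred, target))
```

A model-selection loop in the spirit of the statement would simply keep the checkpoint with the highest mean validation Dice.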
“…Pretrained models such as Xception, VGG16, DenseNet121, and ResNet50 were used. The models were imported from TensorFlow [18] and trained using the dataset. The ReLU function has an advantage over other activation functions in that it does not stimulate all neurons simultaneously and can be used to enhance non-linearity.…”
Section: Convolutional Neural Network Model (mentioning)
confidence: 99%
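As a hedged illustration of the transfer-learning setup this statement describes, the sketch below loads one of the named pretrained backbones (ResNet50) from tf.keras.applications and attaches a small ReLU-based classification head. The number of classes, input size, and head width are assumptions for illustration, not details from the cited paper.

```python
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical; the cited study's dataset is not specified here

# ImageNet-pretrained backbone with its original classification head removed.
backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
backbone.trainable = False  # freeze the backbone for transfer learning

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),   # ReLU head, as discussed above
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Swapping in Xception, VGG16, or DenseNet121 only requires changing the `tf.keras.applications` constructor; the head and training configuration stay the same.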
“…Pretrained models such as Xception, VGG16, DenseNet121, and ResNet50 were used. The models were imported from TensorFlow [18] and trained using the dataset. Figure 4 shows the model development methodology.…”
Section: Convolutional Neural Network Model (mentioning)
confidence: 99%