2022
DOI: 10.1016/j.procs.2022.12.023
Time Complexity in Deep Learning Models

Cited by 40 publications (10 citation statements). References 10 publications.
Citation statements, ordered by relevance:
“…In terms of the dataset: the number of samples, the size of the data in GB or TB, and the number of output class labels. A demonstration of computing the time complexity associated with a convolutional model can be found in Reference 98. The results reveal that the designed model captures the tradeoff between the model's accuracy and its running time.…”
Section: Results (citation type: mentioning, confidence: 99%)
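A minimal sketch may make the running-time side of this tradeoff concrete. The snippet below counts multiply-accumulate operations (MACs) in a small stack of convolutional layers, a common proxy for the running time of a convolutional model; the conv_layer_macs helper, the layer shapes, and the dataset size are all illustrative assumptions, not details taken from the cited paper.

    # Illustrative sketch (not from the cited paper): MACs as a proxy for
    # the running time of a convolutional model.
    def conv_layer_macs(c_in, c_out, kernel, out_h, out_w):
        """MACs for one conv layer: c_in * c_out * kernel^2 * out_h * out_w."""
        return c_in * c_out * kernel * kernel * out_h * out_w

    # Hypothetical 3-layer CNN on 32x32 RGB inputs (stride 1, 'same' padding,
    # 2x2 pooling between layers) -- all shapes are assumptions.
    layers = [
        (3, 32, 3, 32, 32),
        (32, 64, 3, 16, 16),
        (64, 128, 3, 8, 8),
    ]
    macs_per_image = sum(conv_layer_macs(*layer) for layer in layers)
    n_samples = 50_000  # assumed number of training samples
    print(f"MACs per image: {macs_per_image:,}")
    print(f"MACs per epoch: {macs_per_image * n_samples:,}")

Larger kernels, deeper stacks, or more samples raise the MAC count, and hence the running time, without necessarily raising accuracy, which is the tradeoff the statement describes.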
“…In this matter, early trials have demonstrated that incorporating a retrainable encoder as the primary layer of an advanced HAR classifier yields the highest level of performance. To justify its computational cost, it is essential to demonstrate that the network's superior performance is not solely due to its larger structure or its greater number of trainable parameters [28]. Note that when an encoder layer is added to the classifier and requires retraining, the number of trainable parameters increases along with the performance.…”
Section: System Methods (citation type: mentioning, confidence: 99%)
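The parameter-counting step in this argument is easy to demonstrate. The sketch below, assuming PyTorch, compares trainable-parameter counts with a frozen versus a retrainable encoder; the trainable_params helper and the toy encoder/classifier shapes are hypothetical, not the HAR architecture from the cited work.

    # Hedged sketch: how the trainable-parameter count changes when an
    # encoder is frozen vs. retrainable (architecture is an assumption).
    import torch.nn as nn

    def trainable_params(model: nn.Module) -> int:
        return sum(p.numel() for p in model.parameters() if p.requires_grad)

    encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
    head = nn.Linear(32, 6)  # e.g. six activity classes (assumed)
    model = nn.Sequential(encoder, head)

    # Frozen encoder: only the classification head is trainable.
    for p in encoder.parameters():
        p.requires_grad = False
    print("frozen encoder:     ", trainable_params(model))

    # Retrainable encoder: parameter count (and compute cost) grows.
    for p in encoder.parameters():
        p.requires_grad = True
    print("retrainable encoder:", trainable_params(model))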
“…Two other factors that are sensitive and have an impact on computational complexity are the batch size $(b)$ and the learning rate $(\eta)$; according to study findings, these two parameters must be multiplied [58]. So, after considering batch size and learning rate, the overall time complexity can be defined as: $\mathcal{O}(n \cdot k \cdot d^2) + \mathcal{O}(n^2 \cdot k \cdot d \cdot b \cdot \eta)$.…”
Section: Proposed Model (citation type: mentioning, confidence: 99%)
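As a rough illustration of how the two terms scale, the snippet below plugs assumed values into the expression above; the reading of $n$, $k$, and $d$ as the number of samples, classes, and feature dimensions is itself an assumption, and since big-O notation discards constants the printed figures indicate orders of magnitude only.

    # Hedged worked example of O(n*k*d^2) + O(n^2*k*d*b*eta).
    # All values and the meanings of n, k, d are assumptions for illustration.
    n, k, d = 10_000, 10, 128  # samples, output classes, feature dimension
    b, eta = 32, 0.01          # batch size and learning rate

    term1 = n * k * d ** 2            # O(n*k*d^2)
    term2 = n ** 2 * k * d * b * eta  # O(n^2*k*d*b*eta)
    print(f"O(n*k*d^2)       ~ {term1:.3e}")
    print(f"O(n^2*k*d*b*eta) ~ {term2:.3e}")

The term quadratic in $n$ dominates as the sample count grows, which is consistent with the statement's emphasis on batch size and learning rate as complexity-sensitive parameters.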