2018 30th International Teletraffic Congress (ITC 30)
DOI: 10.1109/itc30.2018.00027

Synopsis of the PhD Thesis - Network Computations in Artificial Intelligence


Cited by 6 publications (5 citation statements). References 61 publications (70 reference statements).
“…However, while training them with a fixed sparse topology can lead to good performance [25], it is hard to find an optimal sparse topology that fits the data distribution before training. This problem was addressed by introducing the adaptive sparse connectivity concept through its first instantiation, the Sparse Evolutionary Training (SET) algorithm [24,26]. SET is a straightforward strategy that starts from random sparse networks and can achieve good performance through magnitude-based weight pruning and regrowing after each training epoch.…”
Section: Sparse Neural Network
confidence: 99%
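
The per-epoch topology update that SET performs can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' reference implementation: the function name, the prune_fraction value, and the small-Gaussian re-initialization of regrown weights are assumptions made for the example.

    import numpy as np

    # Minimal SET-style topology update (illustrative sketch): after each training
    # epoch, prune the smallest-magnitude active weights of a sparse layer and
    # regrow the same number of connections at random empty positions.
    def set_epoch_update(weights, mask, prune_fraction=0.3, rng=None):
        rng = rng or np.random.default_rng()
        bool_mask = mask.astype(bool)

        active = np.argwhere(bool_mask)             # indices of existing connections
        n_prune = int(prune_fraction * len(active))
        if n_prune == 0:
            return weights, mask

        # Prune: drop the n_prune active weights with the smallest magnitude.
        magnitudes = np.abs(weights[bool_mask])     # same row-major order as `active`
        for idx in active[np.argsort(magnitudes)[:n_prune]]:
            mask[tuple(idx)] = 0
            weights[tuple(idx)] = 0.0

        # Regrow: add n_prune new connections at random currently-empty positions.
        empty = np.argwhere(mask == 0)
        for idx in empty[rng.choice(len(empty), size=n_prune, replace=False)]:
            mask[tuple(idx)] = 1
            weights[tuple(idx)] = rng.normal(scale=0.01)  # small random re-initialization

        return weights, mask

Calling such an update once per epoch keeps the number of connections constant while letting the sparse topology evolve toward the data distribution.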
“…Dynamic sparse training (DST) is a sparse-to-sparse training paradigm to learn a sparse (small) DNN model based on a dense (big) one [19], [27]. Specifically, DST starts training with a sparse model structure initialized from a dense model.…”
Section: Dynamic Sparse Training
confidence: 99%
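
As a hedged sketch of that sparse-to-sparse starting point, the snippet below derives a sparse layer from a dense one by keeping a random subset of its connections; the target_density parameter and the random mask choice are assumptions for illustration, and concrete DST methods differ in how they select and later update the mask.

    import numpy as np

    # Illustrative sketch: initialize a sparse model structure from a dense layer
    # by keeping roughly target_density of its connections.
    def init_sparse_from_dense(dense_weights, target_density=0.1, rng=None):
        rng = rng or np.random.default_rng()
        mask = (rng.random(dense_weights.shape) < target_density).astype(dense_weights.dtype)
        sparse_weights = dense_weights * mask
        # Training then proceeds only on the masked weights; the mask itself is
        # periodically updated (pruned and regrown) during training.
        return sparse_weights, mask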
“…Mocanu et al [51] have trained sparse restricted Boltzmann machines that have fixed scale-free and small-world connectivity. After that, Mocanu et al [52,53] have introduced the sparse evolutionary training procedure and the concept of adaptive connectivity for intrinsically sparse networks to fit the data distribution. The Nest algorithm [13] avoids starting from a fully connected network by using a grow-and-prune paradigm, that is, expanding a small randomly initialized sparse network into a large one and then shrinking it down.…”
Section: Intrinsically Sparse Neural Network
confidence: 99%
“…Note that it is not possible to also report the accuracy of FC-MLPs, as they cannot run on a typical laptop due to their very high memory and computational requirements. Moreover, even if it were possible to run FC-MLPs, this comparison is outside the scope of this paper and would be redundant, as it has been shown in [6,24,44,52,53,73] that SET-MLP typically outperforms its fully connected counterparts.…”
Section: Experimental Evaluation
confidence: 99%