2020
DOI: 10.3390/e22020204
Emergence of Network Motifs in Deep Neural Networks

Abstract: Network science can offer fundamental insights into the structural and functional properties of complex systems. For example, it is widely known that neuronal circuits tend to organize into basic functional topological modules, called network motifs. In this article we show that network science tools can also be successfully applied to the study of artificial neural networks operating according to self-organizing (learning) principles. In particular, we study the emergence of network motifs in multi-layer perceptrons […]
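To make the core idea concrete, here is a minimal sketch (not the authors' code) of how a trained MLP can be recast as a weighted directed graph for motif analysis: inter-layer weights become edge weights, small weights are thresholded away, and simple connection patterns are counted. The layer sizes, random weights, and threshold below are all hypothetical placeholders.

    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)

    # Hypothetical "trained" MLP weights for a 4 -> 3 -> 2 network;
    # in a real analysis these would be read from the trained model.
    W1 = rng.normal(size=(3, 4))  # hidden <- input
    W2 = rng.normal(size=(2, 3))  # output <- hidden

    def strong_edges(W, thresh=1.0):
        """Return (src, dst) index pairs whose |weight| exceeds thresh."""
        dst, src = np.nonzero(np.abs(W) > thresh)
        return list(zip(src.tolist(), dst.tolist()))

    # Count a simple "diverging" 3-node motif: one source unit strongly
    # connected to two distinct units in the next layer.
    fanout = defaultdict(int)
    for src, _ in strong_edges(W1):
        fanout[src] += 1
    n_diverging = sum(k * (k - 1) // 2 for k in fanout.values())
    print(f"diverging motifs in layer 1: {n_diverging}")

Motif analyses of this kind typically compare such counts against randomized null networks to decide which patterns are over-represented.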

Cited by 13 publications (8 citation statements). References 35 publications.
“…Therefore, we propose constructing a neural network population with different properties within the most common practical scenarios in the field. This approach allows comparison between a wide range of neural networks, which is one of the main differences from previous works such as [14,15].…”
Section: Methods (mentioning)
confidence: 99%
“…In a 2020 work [14], CNs are employed for the analysis of 5-layer deep-belief networks, and the results suggest correlations between topological properties and functioning. Another 2020 work [15] explores the emergence of network motifs, i.e., groups of neurons with a distinct connection pattern, in a small Multilayer Perceptron (MLP) ANN. The authors considered different numbers of neurons in this architecture applied to small synthetic data, and conclude that specific motifs emerge during training.…”
Section: Introduction (mentioning)
confidence: 99%
“…Researchers increasingly recognize, however, that it is not only the parallel structure but also the specific way in which neurons are connected that explains performance differences (Luo, 2021). Indeed, the importance of a particular network topology in explaining that network's function is currently an active area of research, and some of the insights are already being reflected in the way neural networks are set up in order to further enhance their performance (Zambra et al., 2020). Relatedly, the brain seems to be hardwired for particular tasks that are important for our social experience.…”
Section: Understanding the Mechanisms of the Trilemma (mentioning)
confidence: 99%
“…In [18], CNT metrics are used to distill information from Deep Belief Networks: Deep Belief Networks, which are generative models that differ from feedforward neural networks in that the learning phase is generally unsupervised, are studied through the lens of CNT by turning their architectures into a stack of equivalent Restricted Boltzmann Machines. An application of CNT metrics to feed-forward neural networks is described in [20], where the analysis focuses on the emergence of motifs, i.e., connections that present both an interesting geometric shape and strong values of the corresponding Link-Weights. CNT has also notably been used to assess the parallel processing capability of network architectures [13].…”
Section: Related Work (mentioning)
confidence: 99%
“…3) CNT Analysis: We choose MNIST [10] as a test bed for ensemble analysis. We normalize the data sets before training so that each input dimension spans from 0 to 1, while the DNN weights are unbounded in ℝ. As a complement to the ensemble analysis, and as an extension of the work in [20], we use our framework to study the dynamics of individual DNNs on CIFAR10 [8], where we compute CNT metrics for different snapshots (at different levels of accuracy) of a single neural network. Patterns that are specific to that instance are hence local.…”
Section: Training (mentioning)
confidence: 99%
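A short sketch of the preprocessing step described in the quote, under the assumption of min-max scaling: each input dimension is mapped to [0, 1], while the network's weights remain unconstrained. The toy array stands in for the actual MNIST/CIFAR10 data.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy stand-in for an image data set (e.g. pixel values in [0, 255]).
    X = rng.integers(0, 256, size=(1000, 784)).astype(np.float64)

    # Min-max normalization so each input dimension spans [0, 1];
    # the network weights themselves remain unbounded in R.
    lo, hi = X.min(axis=0), X.max(axis=0)
    X_norm = (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    assert X_norm.min() >= 0.0 and X_norm.max() <= 1.0

    # CNT metrics would then be recomputed at successive training
    # snapshots (e.g. one per accuracy level) of the same network.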