2022
DOI: 10.48550/arxiv.2202.04925
Preprint

Decomposing neural networks as mappings of correlation functions

Abstract: Understanding the functional principles of information processing in deep neural networks continues to be a challenge, in particular for networks with trained and thus non-random weights. To address this issue, we study the mapping between probability distributions implemented by a deep feed-forward network. We characterize this mapping as an iterated transformation of distributions, where the non-linearity in each layer transfers information between different orders of correlation functions. This allows us to…
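To make the abstract's central idea concrete, the following is a minimal sketch of propagating the first two cumulants (mean and covariance) of a Gaussian input distribution through a feed-forward layer. The tanh nonlinearity, weight scale, and Monte Carlo estimation are illustrative assumptions, not the paper's method; the point is that a pointwise non-linearity changes the cumulants of the distribution layer by layer (and, beyond the two orders tracked here, also generates higher-order correlation functions).

import numpy as np

rng = np.random.default_rng(0)

def propagate_cumulants(mean, cov, weight, bias, n_samples=100_000):
    """Estimate how one layer x -> tanh(W x + b) maps the first two
    cumulants (mean, covariance) of a Gaussian input distribution.
    Only orders one and two are tracked here; the non-linearity also
    produces higher-order cumulants, which is the sense in which it
    'transfers information between orders of correlation functions'.
    """
    # Sample from the Gaussian input distribution.
    x = rng.multivariate_normal(mean, cov, size=n_samples)
    # Affine pre-activation followed by the pointwise non-linearity.
    h = np.tanh(x @ weight.T + bias)
    # Empirical first- and second-order output cumulants.
    return h.mean(axis=0), np.cov(h, rowvar=False)

# Toy two-layer network: iterate the cumulant mapping layer by layer.
mean, cov = np.zeros(3), np.eye(3)
for _ in range(2):
    W = rng.normal(scale=1.0 / np.sqrt(len(mean)), size=(3, 3))
    b = np.zeros(3)
    mean, cov = propagate_cumulants(mean, cov, W, b)

print("output mean:", mean)
print("output covariance:\n", cov)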
