2022
DOI: 10.48550/arxiv.2205.01940
Preprint

Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs

Abstract: This paper aims to theoretically analyze the complexity of feature transformations encoded in DNNs with ReLU layers. We propose metrics to measure three types of transformation complexity based on information theory. We further discover and prove a strong correlation between the complexity and the disentanglement of transformations. Based on the proposed metrics, we analyze two typical phenomena of the change of transformation complexity during the training process, and explore the ceiling of a…

Cited by 1 publication (2 citation statements)
References 33 publications
“…To this end, we learned four types of ReLU networks on the MNIST dataset via adversarial training. We followed settings in [19] to construct five MLPs, five CNNs, three MLPs with skip connections, and three CNNs with skip connections, respectively. Experimental results show that the average κ over all sixteen networks was 0.097, which verified the correctness of Theorem 3.…”
Section: Explaining the Difficulty of Adversarial Training
confidence: 99%
“…To this end, we learned four types of ReLU networks, including MLPs, CNNs, MLPs with skip connections (namely ResMLP), and CNNs with skip connections (namely ResCNN), on the MNIST dataset [11] via adversarial training. Here, we followed settings in [19] to construct five different MLPs, which consisted of 1, 2, 3, 4, 5 fully-connected (FC) layers, respectively. Each FC layer contained 200 neurons.…”
Section: G Proof of Theorem
confidence: 99%
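The excerpt above describes the MLP family concretely enough to sketch: five MLPs with 1 to 5 fully-connected layers of 200 neurons each, trained on MNIST. Below is a minimal PyTorch sketch of that family under stated assumptions; the flattened 28×28 input, the 10-class linear output head, and the helper name make_mlp are assumptions not given in the excerpt, and the adversarial-training loop and the κ measurement are omitted.

    # Hypothetical sketch of the MLP family quoted above: 1-5 fully-connected
    # layers of 200 neurons each with ReLU activations, built for MNIST.
    # Input flattening and the classification head are assumptions.
    import torch.nn as nn

    def make_mlp(num_fc_layers: int, width: int = 200, num_classes: int = 10) -> nn.Sequential:
        layers = [nn.Flatten()]            # flatten 28x28 MNIST images to 784-d vectors (assumed)
        in_dim = 28 * 28
        for _ in range(num_fc_layers):     # each FC layer has `width` (200) neurons, followed by ReLU
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        layers.append(nn.Linear(in_dim, num_classes))  # assumed 10-class output head
        return nn.Sequential(*layers)

    # The five MLPs from the excerpt: 1, 2, 3, 4, and 5 FC layers, respectively.
    mlps = [make_mlp(n) for n in range(1, 6)]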