2019
DOI: 10.48550/arxiv.1912.04783
Preprint

Frivolous Units: Wider Networks Are Not Really That Wide

Cited by 2 publications (3 citation statements)
References 0 publications
“…It is worth mentioning that wide natural models exhibit a larger number of redundant feature maps compared to their thinner counterparts. This is consistent with the results in [48], where the existence of so-called redundant units is proved and leveraged to explain implicit regularization in wide (natural) models. Our findings suggest a novel direction for investigation about the mechanism through which local robustness may be implemented by adversarially-trained CNNs, namely a coupling between feature maps.…”
Section: Feature Maps Redundancy - supporting
confidence: 90%
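
As a rough illustration of how "redundant" feature maps might be counted, the sketch below flags a map as redundant when its activations are highly correlated with another map's. This is only a hypothetical proxy written for this summary: the cited works use their own definitions and metrics, and the function name, correlation threshold, and spatially averaged activations assumed here are not taken from either paper.

```python
# Minimal sketch (NumPy): count feature maps whose activations are nearly
# duplicated by another map. Illustrative proxy only, not the cited method.
import numpy as np

def count_redundant_maps(acts, threshold=0.95):
    """acts: array of shape (n_samples, n_maps), e.g. spatially averaged
    feature-map activations over a batch of inputs. A map is counted as
    redundant if its absolute Pearson correlation with some earlier map
    exceeds `threshold`."""
    n_maps = acts.shape[1]
    corr = np.corrcoef(acts, rowvar=False)        # (n_maps, n_maps)
    redundant = 0
    for j in range(1, n_maps):
        if np.max(np.abs(corr[j, :j])) > threshold:
            redundant += 1
    return redundant

# Toy example: 64 "maps" where half are noisy copies of the other half.
rng = np.random.default_rng(0)
base = rng.normal(size=(512, 32))
noisy_copies = base + 0.01 * rng.normal(size=base.shape)
acts = np.concatenate([base, noisy_copies], axis=1)
print(count_redundant_maps(acts))   # roughly 32 maps flagged as redundant
```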
“…And as discussed in Section IV-C, networks can also be interpreted by partitioning them into modules and studying each separately. Furthermore, "frivolous" neurons [41] are compressible and include sets of redundant neurons which can be interpreted as modules and can often be merged by weight refactorization. And finally, compression can guide interpretations [152], and interpretations can guide compression [266] (see Section III-G on frivolous neurons and compression).…”
Section: Discussion - mentioning
confidence: 99%
“…Frivolous neurons are not important to a network. [41] define and detect two distinct types in DNNs: prunable neurons which can be removed from a network by ablation, and redundant neurons which can be removed by refactoring layers. They pose a challenge for interpretability because a frivolous neuron's contribution to the network may either be meaningless or difficult to detect with certain methods (e.g.…”
Section: G Frivolous Neurons (Hazard) - mentioning
confidence: 99%
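
To make the distinction between the two neuron types concrete, here is a minimal NumPy sketch of a toy two-layer network (an assumption for illustration, not code from [41]): a unit with no downstream effect can be ablated outright, while a unit that duplicates another can be removed by refactoring, i.e. summing its outgoing weights onto its duplicate before dropping it.

```python
# Minimal sketch (NumPy, toy two-layer network): ablating a prunable unit and
# refactoring away a redundant one. Illustrative only; not the procedure of [41].
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

d_in, d_hidden, d_out = 4, 5, 3
W1 = rng.normal(size=(d_hidden, d_in))   # input -> hidden
W2 = rng.normal(size=(d_out, d_hidden))  # hidden -> output

W1[1] = W1[0]     # unit 1 duplicates unit 0 (redundant)
W2[:, 4] = 0.0    # unit 4 has no downstream effect (prunable)

x = rng.normal(size=d_in)
y_full = W2 @ relu(W1 @ x)

# Refactorization: units 0 and 1 always fire identically, so summing their
# outgoing weights onto unit 0 preserves the function; unit 4 is simply ablated.
W2_merged = W2.copy()
W2_merged[:, 0] += W2_merged[:, 1]
keep = [0, 2, 3]                       # drop redundant unit 1 and prunable unit 4
y_small = W2_merged[:, keep] @ relu(W1[keep] @ x)

print(np.allclose(y_full, y_small))    # True: same outputs with 3 of 5 hidden units
```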