2019
DOI: 10.1016/j.neucom.2019.08.020
Understanding community structure in layered neural networks

Abstract: A layered neural network is now one of the most common choices for the prediction or recognition of high-dimensional practical data sets, where the relationship between input and output data is complex and cannot be represented well by simple conventional models. Its […] our previous methods, by defining the effect of each input dimension on a community, and the effect of a community on each output dimension. We show experimentally that our proposed method can reveal…

Cited by 22 publications (14 citation statements) · References 30 publications
“…We show the effectiveness of our proposed method experimentally by using three types of data sets, which are the same as those used in [24].…”
Section: Methods
confidence: 99%
“…To decompose the function of a trained LNN, we first define a non-negative matrix $V = \{v_{k,l}\}$, whose $k$-th row consists of a feature vector $v_k$ of the $k$-th unit in a hidden layer. Here, we define the feature vector $v_k$ by using a method described in a previous study [24] for determining quantitatively the role of each community (or cluster) of units as regards each unit in an LNN. In the previous study, the role of community $c$ is given by a pair of feature vectors $v^{\mathrm{in}}_c = \{v^{\mathrm{in}}_{ic}\}$ and $v^{\mathrm{out}}_c = \{v^{\mathrm{out}}_{cj}\}$, which represent the effect of the $i$-th input dimension on the community $c$ and the effect of the community $c$ on the $j$-th output dimension, respectively.…”
Section: Extracting Feature Vectors Of Hidden Layer Units Based On Th…
confidence: 99%
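As a concrete illustration of how such a non-negative feature matrix can be assembled, here is a minimal sketch that measures each effect as a root-mean-square change under a knock-out perturbation; the network interface (`hidden`, `predict`, `hidden_mask`) and the function name `effect_feature_matrix` are hypothetical, and the exact definition used in [24] may differ.

```python
import numpy as np

# Minimal sketch (not the cited paper's exact formulation). `net` is assumed to be
# a trained feedforward network exposing two hypothetical methods:
#   net.hidden(X)                    -> (n_samples, n_hidden) activations of one hidden layer
#   net.predict(X, hidden_mask=None) -> (n_samples, n_outputs) outputs, optionally with
#                                       selected hidden units zeroed out via a 0/1 mask
def effect_feature_matrix(net, X):
    """Stack per-unit effect vectors into a non-negative matrix V whose k-th row is
    [effect of each input on unit k | effect of unit k on each output]."""
    H = net.hidden(X)                    # baseline hidden activations
    Y = net.predict(X)                   # baseline outputs
    n_in, n_hid, n_out = X.shape[1], H.shape[1], Y.shape[1]

    v_in = np.zeros((n_hid, n_in))
    for i in range(n_in):
        X_i = X.copy()
        X_i[:, i] = 0.0                  # knock out the i-th input dimension
        dH = net.hidden(X_i) - H
        v_in[:, i] = np.sqrt((dH ** 2).mean(axis=0))   # RMS change of each hidden unit

    v_out = np.zeros((n_hid, n_out))
    for k in range(n_hid):
        mask = np.ones(n_hid)
        mask[k] = 0.0                    # knock out the k-th hidden unit
        dY = net.predict(X, hidden_mask=mask) - Y
        v_out[k, :] = np.sqrt((dY ** 2).mean(axis=0))  # RMS change of each output

    return np.hstack([v_in, v_out])      # non-negative, suitable e.g. for NMF
```

Because every entry is a non-negative magnitude, a matrix built this way can, for instance, be fed to a non-negative matrix factorization to group hidden units.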

“…Here, we propose defining such a feature vector $v_k$ of the $k$-th hidden layer unit based on its correlation with each input and output dimension. In previous studies [31,29], methods have been proposed for determining the role of a unit or a unit cluster based on the square root error. However, these methods only tell us about the magnitude of the effect of each input dimension on a unit and of a unit on each output dimension, not how a hidden layer unit is affected by each input dimension or how each output dimension is affected by a hidden layer unit.…”
Section: Determining Feature Vectors Of Hidden Layer Units
confidence: 99%
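The correlation-based alternative described above can be sketched in a few lines. The snippet below is an illustrative reading rather than the paper's exact definition: it assumes recorded hidden-layer activations and uses plain Pearson correlation, and the function name `correlation_feature_vectors` is ours.

```python
import numpy as np

def correlation_feature_vectors(X, Y, H):
    """Signed, correlation-based feature vectors for hidden-layer units.

    X : (n_samples, n_inputs)   input data
    Y : (n_samples, n_outputs)  target or predicted output data
    H : (n_samples, n_hidden)   activations of the hidden layer of interest

    Returns V of shape (n_hidden, n_inputs + n_outputs): row k concatenates the
    Pearson correlations of unit k's activation with every input dimension and
    every output dimension.
    """
    def colwise_corr(A, B):
        # column-wise Pearson correlation: (n, p) x (n, q) -> (p, q)
        A = (A - A.mean(axis=0)) / (A.std(axis=0) + 1e-12)
        B = (B - B.mean(axis=0)) / (B.std(axis=0) + 1e-12)
        return A.T @ B / A.shape[0]

    v_in = colwise_corr(H, X)    # how unit k co-varies with each input dimension
    v_out = colwise_corr(H, Y)   # how unit k co-varies with each output dimension
    return np.hstack([v_in, v_out])
```

Unlike a squared-error magnitude, each entry keeps its sign, so one can read off whether a hidden unit responds positively or negatively to a given input or output dimension.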
“…Although the above studies have made it possible to provide us with an interpretable representation of an LNN function with a fixed resolution (or number of clusters), there is a problem in that we do not know in advance the optimal resolution for interpreting the original network. In the methods described in the previous studies [28,27,30,31,29], the unit clustering results may change greatly with the cluster size setting, and there is no criterion for determining the optimal cluster size. Another problem is that the previous studies could only provide us with information about the magnitude of the relationship between a cluster and each input or output dimension value, and we could not determine whether this relationship was positive or negative.…”
Section: Introduction
confidence: 99%