2020
DOI: 10.3390/e22121365
Semiotic Aggregation in Deep Learning

Abstract: Convolutional neural networks utilize a hierarchy of neural network layers. The statistical aspects of information concentration in successive layers can bring an insight into the feature abstraction process. We analyze the saliency maps of these layers from the perspective of semiotics, also known as the study of signs and sign-using behavior. In computational semiotics, this aggregation operation (known as superization) is accompanied by a decrease of spatial entropy: signs are aggregated into supersigns. Usi…
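The abstract's key claim is that superization, the aggregation of signs into supersigns, shows up as a decrease in the spatial entropy of saliency maps. As a rough illustration only, the Python sketch below uses a plain Shannon entropy over a saliency map's intensity histogram as a simplified stand-in for the paper's spatial entropy measure, with synthetic maps in place of real CNN saliencies.

```python
import numpy as np

def histogram_entropy(saliency, bins=64):
    """Shannon entropy (bits) of a 2-D saliency map's intensity histogram.
    A simplified proxy: the paper's spatial entropy also accounts for the
    spatial arrangement of values, not just their distribution."""
    hist, _ = np.histogram(saliency.ravel(), bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

# Synthetic example: a diffuse map vs. one concentrated on a few regions.
rng = np.random.default_rng(0)
diffuse = rng.random((14, 14))      # saliency spread over the whole map
concentrated = diffuse ** 6         # mass pushed onto a few "supersigns"

print(histogram_entropy(diffuse), histogram_entropy(concentrated))
# A lower entropy for the concentrated map is what superization predicts
# for deeper layers, where signs have been aggregated into supersigns.
```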

Cited by 7 publications (24 citation statements) | References 31 publications
“…These heat maps can help in the interpretation of the algorithm’s classification results, e.g., as shown in Lee et al [ 47 ]. An example of a heat map (Grad-CAM method) is shown in Figure 5 [ 48 , 49 ]. By using these maps, it can be assessed which scans the algorithm classifies correctly as positives for the wrong reasons, and which scans are missed by the algorithm, which can also be used as indications for possible biases [ 50 ].…”
Section: Results (mentioning)
confidence: 99%
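For readers unfamiliar with the Grad-CAM heat maps mentioned in the statement above, the following is a minimal PyTorch sketch, not the cited authors' pipeline: the pretrained ResNet-18, the choice of `layer4` as target layer, and the random input tensor are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical setup: pretrained ResNet-18, last conv block as target layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)        # placeholder input image tensor
scores = model(x)
scores[0, scores.argmax()].backward()  # gradient of the top class score

# Grad-CAM: weight each activation channel by its average gradient,
# sum over channels, and keep only positive evidence.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```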
“…Therefore it is interesting to observe if any form of superization is present in the training process of a CNN. In [19] we applied the above theorem to the neural layers of CNNs. We computed superization with respect to the spatial entropy variations of the saliency maps.…”
Section: Semiotic Superization in CNNs (mentioning)
confidence: 99%
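A concrete reading of that procedure, checking whether the entropy of layer-wise saliency maps decreases towards the output, might look like the sketch below. The histogram-based entropy is again a simplified proxy, and `saliency_maps` is an assumed input; neither is taken from the cited implementation.

```python
import numpy as np

def histogram_entropy(smap, bins=64):
    """Shannon entropy (bits) of a saliency map's intensity histogram;
    a simplified stand-in for the spatial entropy used in the cited work."""
    hist, _ = np.histogram(np.asarray(smap).ravel(), bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def superization_profile(saliency_maps):
    """Per-layer entropies and a flag for a monotone, layer-by-layer decrease.

    `saliency_maps` is assumed to hold one 2-D saliency map per neural
    layer, ordered from the input towards the output of the network."""
    h = [histogram_entropy(m) for m in saliency_maps]
    return h, all(b <= a for a, b in zip(h, h[1:]))

# Hypothetical usage with synthetic maps standing in for real layer saliencies.
maps = [np.random.rand(28, 28), np.random.rand(14, 14) ** 2, np.random.rand(7, 7) ** 4]
print(superization_profile(maps))
```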
“…The field of interpretability/explainability in deep learning has witnessed an explosion of published papers in recent years. Even if there is no fundamental theory that can elucidate all underlying mechanisms present in those networks, multiple works tried to deal with this issue by coming up with partial solutions, either by visual explanations [22], [18] or theoretical insights [14], [9], [19]. Therefore, we can say that the black box interpretation of deep learning is not true anymore, and what we need are better techniques to interpret these models.…”
Section: Introduction (mentioning)
confidence: 99%
“…Despite the ability of generating, especially in image recognition tasks, human-alike predictions, CNNs still lack a major component: interpretability [ 34 ]. Neural networks in general are known for their black-box type of behavior, hiding the inner working mechanisms of reasoning.…”
Section: Computing the TE Feedback in a CNN (mentioning)
confidence: 99%