2020
DOI: 10.1016/j.neucom.2020.06.117
Deep neural network to extract high-level features and labels in multi-label classification problems

Cited by 34 publications (11 citation statements)
References 40 publications
“…Table 2 displays the results of our measure in the model proposed by [1]. These tables report the number of high-level features, the reduction percentage those high-level features represent (%Red-Features), the number of high-level labels, the reduction percentage in the number of labels (%Red-Labels), the accuracy obtained when using only the high-level features and labels, and the accuracy loss with respect to the model using all features and labels (i.e.…”
Section: Results
Mentioning confidence: 99%
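The quoted statement reports two derived metrics: reduction percentages for features/labels and the accuracy loss of the reduced model. A minimal sketch of how such metrics are computed; the function names and example numbers are assumptions for illustration, not the authors' code:

```python
# Hypothetical helpers for the reported metrics; names and example
# numbers are illustrative assumptions, not taken from the paper.

def reduction_pct(n_original: int, n_high_level: int) -> float:
    """%Red: share of features/labels removed by high-level extraction."""
    return 100.0 * (n_original - n_high_level) / n_original

def accuracy_loss(acc_full: float, acc_reduced: float) -> float:
    """Accuracy lost when using only the high-level features and labels."""
    return acc_full - acc_reduced

# e.g. 294 features pooled down to 64, accuracy dropping 0.81 -> 0.79
print(f"%Red-Features: {reduction_pct(294, 64):.1f}%")    # 78.2%
print(f"Accuracy loss: {accuracy_loss(0.81, 0.79):.2f}")  # 0.02
```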
“…Recently, in [1] the authors introduced a new bidirectional network architecture composed of stacked association-based pooling layers to extract high-level features and labels in MLC problems. Unlike the classic use of pooling, this approach does not pool pixels but problem features or labels.…”
Section: Bidirectional Deep Neural Network
Mentioning confidence: 99%
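The statement describes pooling over groups of associated features or labels rather than spatially adjacent pixels. A minimal single-layer sketch under that reading; the correlation-based grouping, the threshold, and all names are assumptions for illustration, not the architecture from [1]:

```python
import numpy as np

# Sketch of association-based pooling: instead of pooling neighboring
# pixels, pool groups of associated *feature columns*. The greedy
# correlation grouping below is an assumed stand-in for the paper's
# actual association measure.

def correlation_groups(X: np.ndarray, threshold: float = 0.8) -> list[list[int]]:
    """Greedily group features whose absolute correlation exceeds threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    unassigned, groups = set(range(X.shape[1])), []
    for i in range(X.shape[1]):
        if i in unassigned:
            group = [j for j in unassigned if corr[i, j] >= threshold]
            unassigned -= set(group)
            groups.append(group)
    return groups

def association_pooling(X: np.ndarray, groups: list[list[int]]) -> np.ndarray:
    """Max-pool each group of associated features into one high-level
    feature. X has shape (n_samples, n_features)."""
    return np.stack([X[:, g].max(axis=1) for g in groups], axis=1)

X = np.random.rand(100, 32)
groups = correlation_groups(X)
X_high = association_pooling(X, groups)  # shape (100, n_groups)
```

The same operation applied to the label matrix would yield high-level labels, which matches the bidirectional framing of the quoted description.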
“…The current progress of Deep Learning (DL) depends on techniques such as initial weight selection, local receptive fields, and weight sharing. When using deeper networks (e.g., more than 100 layers), one still faces the traditional difficulty of vanishing gradients during backpropagation: the degradation problem [33]. The more layers, the higher the training and test error rates.…”
Section: Deep Residual Network
Mentioning confidence: 99%
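Residual networks address this degradation problem by giving each block an identity shortcut, so the block only learns a residual correction F(x) and gradients can flow back through the shortcut unattenuated. A minimal NumPy sketch, with dimensions and weights chosen arbitrarily for illustration:

```python
import numpy as np

# Minimal residual block: output relu(F(x) + x). The identity path
# gives gradients a direct route through very deep stacks, which is
# what mitigates the degradation problem described above.

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def residual_block(x: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> np.ndarray:
    """F is a two-layer transform that preserves the feature dimension,
    so the identity shortcut can be added elementwise."""
    f = relu(x @ W1) @ W2
    return relu(f + x)  # identity shortcut

d = 16
x = np.random.randn(8, d)
W1, W2 = np.random.randn(d, d) * 0.1, np.random.randn(d, d) * 0.1
y = residual_block(x, W1, W2)  # same shape as x: (8, 16)
```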