2018
DOI: 10.1016/j.knosys.2017.10.006
A Siamese Deep Forest

Cited by 54 publications (22 citation statements); references 27 publications.
“…for j = c, which implies that P_{c,j}, 1 − P_{c,j}, and p(t_{c,j}) for j = c, j ≠ c, and all j, respectively, are all O(exp(−1/σ_j²)) as σ_j → 0. Applying these to (34) and (35) proves (27)–(29). Meanwhile, (26) is proved by observing that |ρ(ŷ_j) − ρ(ŷ′_j)| ≤ |ŷ_j − ŷ′_j| and that the expectation of |ŷ − ŷ′|, provided that x, x′ ∈ C_c, is 2 Σ_j σ_j².…”
Section: B. Layer-wise Intra/Interclass Distance Dynamics in DAN
confidence: 71%
“…To assure a high degree of diversity, leading to performance gains in consequence, each ensemble is instantiated with RFs of different types. Other DF-driven S-DNNs include [33] and [34]. However, the performance evaluation of these DF networks is limited to handwritten digit recognition only.…”
Section: A. Related Work
confidence: 99%
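The diversity argument in the excerpt above — instantiating each ensemble with random forests of different types so their errors decorrelate — can be sketched as follows. This is a minimal illustration, not the cited authors' configuration: the dataset, forest sizes, and the simple probability-averaging combiner are all assumptions.

```python
# Hedged sketch: a heterogeneous forest ensemble mixing standard and
# extremely-randomized forests. The two randomization schemes differ
# (bootstrapped greedy splits vs. random split thresholds), which is
# the intended source of diversity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Forests of different types, combined by averaging class probabilities.
forests = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    ExtraTreesClassifier(n_estimators=100, random_state=1),
]
probas = np.mean(
    [f.fit(X_tr, y_tr).predict_proba(X_te) for f in forests], axis=0
)
pred = probas.argmax(axis=1)
accuracy = (pred == y_te).mean()
```

Averaging probabilities (rather than hard votes) preserves each forest's confidence information, which matters when the member models disagree.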
“…Challenges such as local optimality or vanishing gradients also accompany this advantage, and regularizations similar to early stopping, weight decay, dropout [141], and batch normalization [142] remain to be discovered. In addition to networks, deep structures have recently been extended to ensembles of decision trees, such as gcForest [143] and related variants [144,145]. It can be asserted that deep learning aggregated with other learning frameworks will be promising, and deeper models not limited to networks will emerge in the coming decades to satisfy the need to fit complex datasets.…”
Section: Deep Models, Short Models, and Ensemble Models
confidence: 99%
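The deep-tree-ensemble idea referenced above (gcForest and its variants, of which the Siamese Deep Forest is one) stacks forest layers the way a network stacks hidden layers: each layer's forests emit class-probability vectors that are concatenated onto the input features for the next layer. The sketch below shows one such cascade layer under assumptions of my own — two forests per layer and 3-fold out-of-fold probabilities — and is not the exact gcForest implementation.

```python
# Minimal sketch of a gcForest-style cascade layer: out-of-fold class
# probabilities from several forests are appended to the features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, n_features=16, random_state=0)

def cascade_layer(X, y):
    """One deep-forest layer: each forest contributes an out-of-fold
    class-probability block, concatenated onto the original features."""
    forests = [
        RandomForestClassifier(n_estimators=50, random_state=0),
        ExtraTreesClassifier(n_estimators=50, random_state=1),
    ]
    # cross_val_predict keeps the augmented features honest: each row's
    # probabilities come from forests that never saw that row.
    augmented = [
        cross_val_predict(f, X, y, cv=3, method="predict_proba")
        for f in forests
    ]
    return np.hstack([X] + augmented)

X1 = cascade_layer(X, y)   # layer 1 output feeds layer 2
X2 = cascade_layer(X1, y)
# Feature width grows by n_forests * n_classes per layer.
```

In the full gcForest procedure, layers are added until validation accuracy stops improving, which gives the model an adaptively chosen depth rather than a fixed one.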