2022
DOI: 10.1162/neco_a_01486
Comparison of the Representational Power of Random Forests, Binary Decision Diagrams, and Neural Networks

Abstract: In this letter, we compare the representational power of random forests, binary decision diagrams (BDDs), and neural networks in terms of the number of nodes. We assume that an axis-aligned function on a single variable is assigned to each edge in random forests and BDDs, and the activation functions of neural networks are sigmoid, rectified linear unit, or similar functions. Based on existing studies, we show that for any random forest, there exists an equivalent depth-3 neural network with a linear number of…
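The abstract's central claim — that any random forest has an equivalent depth-3 neural network — rests on the classical encoding of a decision tree as a two-hidden-layer threshold network: one unit per internal node (split indicator), one unit per leaf (AND of the indicators on its path), and a linear output summing leaf values. The sketch below illustrates this encoding on a hypothetical 3-leaf tree; the tree, thresholds, and weights are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Hypothetical 3-leaf axis-aligned tree (an assumption for illustration):
# if x0 <= 0.5: (if x1 <= 0.3: leaf A=1.0 else leaf B=2.0) else leaf C=3.0

H = lambda z: (z >= 0).astype(float)  # Heaviside step activation

def tree(x):
    if x[0] <= 0.5:
        return 1.0 if x[1] <= 0.3 else 2.0
    return 3.0

def net(x):
    # Hidden layer 1: one unit per internal node, firing 1 iff the split holds.
    W1 = np.array([[-1.0,  0.0],   # h1 = H(0.5 - x0), i.e. x0 <= 0.5
                   [ 0.0, -1.0]])  # h2 = H(0.3 - x1), i.e. x1 <= 0.3
    b1 = np.array([0.5, 0.3])
    h = H(W1 @ x + b1)
    # Hidden layer 2: one unit per leaf, an AND of the splits on its path.
    W2 = np.array([[ 1.0,  1.0],   # leaf A: h1 AND h2
                   [ 1.0, -1.0],   # leaf B: h1 AND (NOT h2)
                   [-1.0,  0.0]])  # leaf C: NOT h1
    b2 = np.array([-2.0, -1.0, 0.5])
    leaves = H(W2 @ h + b2)       # exactly one leaf indicator fires
    # Output layer: weighted sum of leaf indicators = the leaf's value.
    return np.array([1.0, 2.0, 3.0]) @ leaves

for x in [np.array([0.2, 0.2]), np.array([0.2, 0.9]), np.array([0.9, 0.1])]:
    assert net(x) == tree(x)
```

The node count is linear in the tree size (one unit per internal node plus one per leaf), and a forest can be encoded by placing such networks side by side and averaging their outputs, keeping depth 3.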

Cited by 3 publications (5 citation statements) · References 21 publications
“…(23) and n nodes y_i, 1 ≤ i ≤ n. The five hidden layers perform the computation of Eqs. (21) and (22). The second layer keeps a copy of the 2d nodes x_j and performs the computation of δ(x_j, x_k) with 4d^2 nodes ψ^1_kj, ψ^2_kj, ρ^1_kj, ρ^2_kj, 1 ≤ j, k ≤ d, based on Prop.…”
Section: Appendix: Proofs and Examples (mentioning)
confidence: 99%
“…Bengio et al. [10] and Biau et al. [11] proved that decision trees and random forests can be efficiently expressed as neural networks using sigmoidal, Heaviside, and hyperbolic tangent activation functions. Later, Kumano and Akutsu [22] extended the result to neural networks with rectified linear unit (ReLU) and related activation functions.…”
Section: Introduction (mentioning)
confidence: 97%
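The ReLU extension mentioned in this citation hinges on the fact that a hard threshold can be built from two ReLU units: relu(z/ε) − relu(z/ε − 1) is 0 for z ≤ 0 and 1 for z ≥ ε, with only a narrow linear transition band. A minimal sketch, where the width ε is an assumed parameter and not a value from the paper:

```python
import numpy as np

# Heaviside-style threshold from two ReLU units; eps is an assumed
# transition width for illustration, not a constant from the paper.
relu = lambda z: np.maximum(z, 0.0)

def soft_step(z, eps=1e-6):
    # 0 for z <= 0, 1 for z >= eps, linear on the narrow band (0, eps).
    return relu(z / eps) - relu(z / eps - 1.0)

z = np.array([-1.0, -1e-3, 1e-3, 1.0])
print(soft_step(z))  # -> [0. 0. 1. 1.]
```

Substituting this pair for each step unit roughly doubles the hidden-unit count, which is why the node bounds stay linear when moving from Heaviside to ReLU activations.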
“…The osteosarcoma gene-sequencing dataset is small, but its dimensionality is relatively high. The output data, after initial feature selection by random forest [20] followed by deeper feature extraction with a pretrained E-CNN model, satisfies the small-data-volume, low-dimensionality conditions described in the previous section. Therefore, this paper uses an SVM in place of the sigmoid activation function of the convolutional neural network as the final classifier, which further improves the generalization ability and stability of the model.…”
Section: Introduction (mentioning)
confidence: 99%
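The pipeline this citing paper describes — random-forest feature selection followed by a stronger final classifier — can be sketched with standard scikit-learn components. This is a hedged illustration on synthetic data: the dataset, estimator settings, and the omission of the E-CNN feature-extraction stage are all assumptions, not the cited paper's implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for a small, high-dimensional dataset
# (few samples, many features), not real gene-sequencing data.
X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           random_state=0)

pipe = make_pipeline(
    # Stage 1: random-forest importances select a reduced feature subset.
    SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0)),
    # Stage 2: an SVM serves as the final classifier, standing in for the
    # sigmoid output head mentioned in the citation (E-CNN stage omitted).
    SVC(kernel="rbf"),
)
score = cross_val_score(pipe, X, y, cv=5).mean()
print(round(score, 2))
```

The design point the citation makes is that a margin-based classifier such as an SVM tends to generalize better than a sigmoid output layer when the post-selection feature space is low-dimensional and the sample count is small.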
“…The output data after initial feature selection by random forest [20], followed by deeper feature extraction using a pretrained E-CNN model, satisfies the characteristics of small data volume and low dimensionality described in the previous section.…”
Section: Introduction (mentioning)
confidence: 99%