2010
DOI: 10.1016/j.neunet.2009.09.002

Comparison of universal approximators incorporating partial monotonicity by structure

Cited by 22 publications (16 citation statements)
References 5 publications
“…Also, extra hidden layers of positive parameters can be added to the model. As pointed out by Lang (2005) and Minin et al. (2010), an additional hidden layer is required for the MMLP to maintain its universal function approximation capabilities. While multiple hidden layers are included in the software implementation by Cannon (2017), for the sake of simplicity, this study only considers the single hidden layer architecture of Zhang and Zhang (1999).…”
Section: Monotone Multi-layer Perceptron (MMLP) (mentioning)
confidence: 99%
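
For context, a minimal NumPy sketch of the constrained architecture these statements describe: weights attached to the monotone inputs, and all downstream weights, are kept positive by passing unconstrained parameters through exp(), and a second hidden layer of positive weights is included, as Lang (2005) and Minin et al. (2010) require for universal approximation. All names (partial_monotone_mlp, etc.) are illustrative, not taken from the cited software.

```python
import numpy as np

def partial_monotone_mlp(x_mono, x_free, params):
    """Sketch of an MLP that is non-decreasing in x_mono and unconstrained in x_free.

    Monotonicity is enforced by structure: weights on the monotone inputs and all
    downstream weights are made positive via exp(); weights on the free inputs are
    left unconstrained. The second hidden layer of positive weights preserves the
    universal approximation property (Lang 2005; Minin et al. 2010).
    """
    Wm, Wf, b1, W2, b2, w3, b3 = params
    h1 = np.tanh(x_mono @ np.exp(Wm) + x_free @ Wf + b1)  # positive weights on x_mono
    h2 = np.tanh(h1 @ np.exp(W2) + b2)                    # extra positive hidden layer
    return h2 @ np.exp(w3) + b3                           # positive output weights

# toy usage: 1 monotone covariate, 2 free covariates, 4 hidden units per layer
rng = np.random.default_rng(0)
n_h = 4
params = (rng.normal(size=(1, n_h)), rng.normal(size=(2, n_h)), np.zeros(n_h),
          rng.normal(size=(n_h, n_h)), np.zeros(n_h),
          rng.normal(size=(n_h, 1)), np.zeros(1))
y = partial_monotone_mlp(rng.normal(size=(5, 1)), rng.normal(size=(5, 2)), params)
```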
“…These features, which are combined into a single, unified framework, are made possible through a novel combination of elements drawn from the standard QRNN model (White 1992; Taylor 2000; Cannon 2011), the monotone multi-layer perceptron (MMLP) (Zhang and Zhang 1999; Lang 2005; Minin et al. 2010), the composite QRNN (CQRNN), the expectile regression neural network, and the generalized additive neural network (Potts 1999). To the best of the author's knowledge, the MCQRNN model is the first neural network-based implementation of quantile regression that guarantees non-crossing of regression quantiles.…”
Section: Introduction (mentioning)
confidence: 99%
“…Archer and Wang [3] propose a monotone model by constraining the neural net weights to be positive. Other methods enforce constraints on model weights [11,46,33,16,2], or force the derivative of the output to be strictly positive [47]. Monotonic networks [40] guarantee monotonicity by constructing a three-layer network using monotonic linear embedding and max-min-pooling.…”
Section: Related Work (mentioning)
confidence: 99%
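
A minimal NumPy sketch of the max-min construction mentioned in that excerpt (reference [40] above), assuming monotonicity in all inputs: a linear embedding with positive weights, a max within each group, and a min across groups. Names and sizes are illustrative.

```python
import numpy as np

def min_max_network(x, W, b):
    """Sketch of a min-max (Sill-style) monotonic network.

    W has shape (groups, units_per_group, n_inputs). exp() keeps the linear
    embedding weights positive; taking the max within each group and the min
    across groups yields an output that is non-decreasing in every input.
    """
    z = np.einsum('ni,gui->ngu', x, np.exp(W)) + b   # positive linear embedding
    return z.max(axis=2).min(axis=1)                 # max within groups, min across

# toy usage: 3 groups of 4 units over 2 inputs
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4, 2))
b = rng.normal(size=(3, 4))
y = min_max_network(rng.normal(size=(5, 2)), W, b)   # shape (5,)
```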
“…This method simultaneously estimates multiple non-crossing quantile functions and allows for optional monotonicity, positivity/non-negativity, and additivity constraints. Therefore, it combines elements drawn from the standard QRNN model [5,33,34], the monotone multi-layer perceptron (MMLP) [38,39], and the composite QRNN (CQRNN) [40]. The basis of the MCQRNN is the multi-layer perceptron (MLP) neural network with partial monotonicity constraints [41].…”
Section: Quantile Regression Neural Network (QRNN) (mentioning)
confidence: 99%
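
The QRNN and CQRNN models referenced in this excerpt are fitted by minimising the quantile (pinball) loss. A minimal sketch, with illustrative names, of that loss and of the composite form averaged over several quantile levels:

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Quantile (pinball) loss: tau * e for e >= 0, (tau - 1) * e for e < 0, with e = y - q."""
    e = y - q
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

def composite_pinball_loss(y, q_by_tau, taus):
    """Composite form: average the pinball loss over several quantile levels at once,
    in the spirit of the composite QRNN cited above (illustrative sketch only)."""
    return np.mean([pinball_loss(y, q, t) for q, t in zip(q_by_tau, taus)])

# toy usage: evaluate fitted 10th/50th/90th percentile estimates against observations y
y = np.array([1.0, 2.0, 3.0])
q_by_tau = [np.array([0.5, 1.5, 2.5]),
            np.array([1.0, 2.0, 3.0]),
            np.array([1.5, 2.5, 3.5])]
loss = composite_pinball_loss(y, q_by_tau, [0.1, 0.5, 0.9])
```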