IEEE 18th International Conference on Intelligent Engineering Systems (INES), 2014
DOI: 10.1109/ines.2014.6909379

On practical constraints of approximation using neural networks on current digital computers

Cited by 4 publications (4 citation statements, all mentioning) · References 13 publications
“…The number of input neurons in each MLP will be the same as the number of preceding concepts, and there will be a single output, as only one concept is being influenced. The topology of the deployed MLPs is to be considered, but generally even small and simple 2-hidden-layer topologies, with fewer than 5 neurons in each hidden layer, should perform better than the conventional linear FCM relations, as their approximation capability is greater [13]. It is also possible to use MLPs with no hidden layer.…”
Section: A. Nonlinear Relations Represented by MLPs (mentioning)
confidence: 99%
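
To make the quoted topology concrete, here is a minimal NumPy sketch of one such nonlinear relation: an MLP whose inputs are the activations of the preceding concepts, with two small hidden layers and a single output for the influenced concept. The hidden sizes, sigmoid activation, and random weights are illustrative assumptions, not details from the cited papers.

```python
import numpy as np

def sigmoid(x):
    """Logistic activation, the usual squashing function in FCMs."""
    return 1.0 / (1.0 + np.exp(-x))

def make_mlp(n_inputs, hidden=(4, 4), seed=0):
    """Random weights for an MLP of the quoted shape: n_inputs equals the
    number of preceding concepts, two hidden layers with fewer than 5
    neurons each, and a single output for the influenced concept."""
    rng = np.random.default_rng(seed)
    sizes = (n_inputs, *hidden, 1)
    return [(rng.standard_normal((m, n)), rng.standard_normal(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Forward pass; returns the new activation of the influenced concept."""
    a = np.asarray(x, dtype=float)
    for W, b in params:
        a = sigmoid(a @ W + b)
    return float(a[0])

# One nonlinear FCM relation: 3 preceding concepts drive one target concept.
params = make_mlp(n_inputs=3)
print(mlp_forward(params, [0.2, 0.9, 0.5]))
```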
“…The replacement of the linear relations with the MLPs naturally leads to an increased computational complexity of the resulting FCM. This is obvious, as the increased approximation capability cannot be achieved without an increase in model complexity [13]. The increase in complexity is equal to the sum of the computational complexities of all used MLPs and grows exponentially with the size of the used MLPs.…”
Section: A. Nonlinear Relations Represented by MLPs (mentioning)
confidence: 99%
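
The cost statement can be made concrete by counting operations. The hypothetical helper below tallies multiply-accumulate operations for one MLP forward pass and sums them over all MLPs in the map; the fan-ins and hidden sizes are made-up examples.

```python
def mlp_macs(n_inputs, hidden, n_outputs=1):
    """Multiply-accumulate operations for one MLP forward pass: a layer
    mapping m inputs to n units costs m * n MACs."""
    sizes = (n_inputs, *hidden, n_outputs)
    return sum(m * n for m, n in zip(sizes[:-1], sizes[1:]))

# Hypothetical 4-concept FCM: each concept's incoming linear relations are
# replaced by one MLP with two 4-neuron hidden layers.
fan_ins = [3, 2, 3, 1]                      # preceding concepts per target
total = sum(mlp_macs(k, (4, 4)) for k in fan_ins)
print(total)                                # cost of one FCM update, in MACs
```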
“…Therefore, $\varphi_{\theta_{1,1},z}(x; F_{p,q}) = \varphi_{\theta_{1,2},z}(x; F_{p,q}) = 0$ (20), which leads to $f_1(x; F_{p,q}) = 0$. If $2^{-p+e_{\min}} \le x \le \Omega$, we have…”
Section: Representability Test for Case 1-3 (1) (mentioning)
confidence: 99%
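
If, as the notation suggests, $F_{p,q}$ is a floating-point format with $p$ precision bits, $e_{\min}$ its smallest exponent, and $\Omega$ its largest finite value, then $2^{-p+e_{\min}}$ is the smallest positive (subnormal) number, and the condition brackets the representable positive range. This reading is an assumption from the excerpt alone; the sketch below checks the corresponding bounds for IEEE 754 double precision.

```python
import sys

# IEEE 754 binary64: p = 52 fraction bits, e_min = -1022 (assumed reading
# of the F_{p,q} notation; not confirmed by the excerpt).
p, e_min = 52, -1022
smallest = 2.0 ** (e_min - p)        # 2**-1074, smallest positive subnormal
omega = sys.float_info.max           # largest finite double, ~1.797e308

print(smallest)                      # 5e-324
print(smallest / 2.0)                # 0.0 -- underflows: nothing below it
print(omega * 2.0)                   # inf -- overflows: nothing above Omega
```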
“…Another line of work on memory capacity has studied regression problems and showed that shallow networks with O(n) parameters can fit arbitrary n input/output pairs [4,11,23,26], while o(n) parameters are sufficient for deep ones, i.e., ω(1) layers [17,22]. However, since these results assume exact mathematical operations, they do not apply to neural networks executed on computers that can only represent a tiny subset of the reals (e.g., floating-point numbers) and perform inexact operations (e.g., floating-point operations) [24,20].…”
Section: Introduction (mentioning)
confidence: 99%
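
The gap between exact and machine arithmetic that this passage points to is easy to observe: floating-point numbers form a finite subset of the reals and every operation rounds, as the snippet below illustrates.

```python
# Floats are a finite subset of the reals: 0.1 has no exact binary
# representation, so even this sum is already inexact.
print(0.1 + 0.2 == 0.3)                          # False
print(f"{0.1 + 0.2:.17g}")                       # 0.30000000000000004

# Rounding breaks associativity and absorbs small terms, so a fit that
# exists in exact arithmetic need not survive evaluation in floats.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))    # False
print(1e16 + 1.0 == 1e16)                        # True: the 1.0 is absorbed
```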