2015
DOI: 10.11568/kjm.2015.23.3.327

Constructive Approximation by Neural Networks With Positive Integer Weights

Abstract: In this paper, we study constructive approximation by neural networks with positive integer weights. As with neural networks with real weights, we show that neural networks with positive integer weights can approximate any continuous function on a compact subset of R arbitrarily well. We give a numerical result to support the theoretical result.
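The abstract states the approximation result but does not reproduce the construction. As a loose numerical illustration only, the sketch below builds a single hidden layer network on [0, 1] whose inner weights all equal one positive integer m, with steep sigmoids acting as near-step functions; the outer coefficients remain real. This is an assumption-laden sketch, not the paper's construction, and every name in it (integer_weight_net, n, m) is hypothetical.

```python
# Illustrative sketch (not the paper's scheme): approximate a continuous f
# on [0, 1] by N(x) = f(t_0) + sum_i (f(t_i) - f(t_{i-1})) * sigmoid(m*x + b_i),
# where the inner weight m is a single large positive integer.
import numpy as np

def sigmoid(z):
    # clip to avoid overflow warnings in exp for very large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def integer_weight_net(f, n=50, m=1000):
    """n hidden units, each with the same positive integer inner weight m."""
    t = np.linspace(0.0, 1.0, n + 1)      # knots of a uniform partition of [0, 1]
    c = f(t[1:]) - f(t[:-1])              # outer coefficients (real numbers)
    b = -m * (t[:-1] + t[1:]) / 2.0       # biases center each transition mid-interval
    def N(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        # for large m each sigmoid acts like a unit step, so N is close to
        # the piecewise-constant interpolant of f at the knots
        return f(t[0]) + sigmoid(m * x[:, None] + b[None, :]) @ c
    return N

f = lambda x: np.sin(2.0 * np.pi * x)     # a sample continuous target on [0, 1]
N = integer_weight_net(f)
x = np.linspace(0.0, 1.0, 1001)
print("max |f - N| on [0, 1]:", float(np.max(np.abs(f(x) - N(x)))))
```

Refining the partition (larger n) and steepening the sigmoids (larger integer m) drives the uniform error down, which mirrors the flavor of the constructive statement in the abstract.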

Cited by 2 publications (3 citation statements) | References 9 publications
“…As mentioned above, single hidden layer networks with a nonpolynomial activation function possess this property. There are several results (see, e.g., [8,14,26,32]) showing that a single hidden layer perceptron with a reasonably restricted set of weights still retains the universal approximation property. But if the weights are taken from too “narrow” a set, then the universal approximation property is generally violated, and there arises the problem of identifying compact sets X ⊂ R^d such that the considered network approximates arbitrarily well any given continuous function on X.…”
Section: Introduction
confidence: 99%
“…where σ : R → R is an activation function and a_i, b_i, c_i ∈ R. Hahm and Hong [3] investigated neural network approximation of continuous functions on R. They also suggested a constructive approximation by neural networks with a sigmoidal function using the convolution method in [5].…”
Section: Introduction
confidence: 99%
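The displayed formula in the snippet above is truncated; the names σ, a_i, b_i, c_i suggest the standard single hidden layer form N(x) = Σ_{i=1}^{n} c_i σ(a_i x + b_i). A minimal evaluation sketch under that assumption follows; all parameter values in it are arbitrary illustrations, not values from the cited papers.

```python
# Evaluate the standard single hidden layer form
#   N(x) = sum_{i=1}^{n} c_i * sigma(a_i * x + b_i)
# for parameter vectors a, b, c of equal length n.
import numpy as np

def sigma(z):
    # logistic sigmoid, a common choice of sigmoidal activation
    return 1.0 / (1.0 + np.exp(-z))

def single_hidden_layer(x, a, b, c):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # rows index sample points, columns index hidden units
    return sigma(np.outer(x, a) + b) @ c

a = np.array([1.0, 2.0, 3.0])    # inner weights a_i
b = np.array([0.0, -1.0, 0.5])   # biases b_i
c = np.array([0.7, -0.2, 1.1])   # outer coefficients c_i
print(single_hidden_layer([0.0, 0.5, 1.0], a, b, c))
```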
“…Medvedeva suggested a density result for neural networks with a sigmoidal function using Taylor's theorem in [8]. In order to explore approximation by neural networks, we use the notation that appears in [2] and [5]. For n ∈ N and a sigmoidal function σ, we define Ψ_{n,σ} by…”
Section: Introduction
confidence: 99%