2020
DOI: 10.24996/ijs.2020.61.4.20
Monotone Approximation by Quadratic Neural Network of Functions in Lp Spaces for p<1

Abstract: Quadratic functions have flexible and computationally convenient properties that make them attractive as activation functions for feedforward neural networks (FNNs). We study the essential approximation rate of any Lebesgue-integrable monotone function by a neural network with quadratic activation functions. The simultaneous degree of essential approximation is also studied. Both estimates are proved to be within the second order of the modulus of smoothness.
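To make the object of study concrete, the following is a minimal sketch of a one-hidden-layer network with a quadratic activation σ(t) = t², of the kind the abstract refers to. The target function, network width, and the least-squares fit of the outer coefficients are all illustrative assumptions, not the paper's construction:

```python
import numpy as np

def quadratic_activation(t):
    # Quadratic activation sigma(t) = t^2 (the activation family named in the
    # abstract; the exact form used in the paper is assumed here).
    return t ** 2

def qnn(x, weights, biases, coeffs):
    # One-hidden-layer FNN: N(x) = sum_k c_k * sigma(w_k * x + b_k).
    x = np.asarray(x, dtype=float)
    hidden = quadratic_activation(np.outer(x, weights) + biases)  # shape (n, K)
    return hidden @ coeffs

# Illustrative fit: approximate the monotone target f(x) = x^3 on [0, 1]
# by solving least squares for the outer coefficients (hypothetical setup).
rng = np.random.default_rng(0)
K = 8
w = rng.normal(size=K)
b = rng.normal(size=K)
xs = np.linspace(0.0, 1.0, 200)
target = xs ** 3
design = quadratic_activation(np.outer(xs, w) + b)
c, *_ = np.linalg.lstsq(design, target, rcond=None)
approx = qnn(xs, w, b, c)
print(f"max abs error: {np.max(np.abs(approx - target)):.4f}")
```

Note that σ(wx + b) = (wx + b)² spans only quadratics in x, so the residual against x³ cannot vanish; the paper's interest is precisely in quantifying such approximation errors via the second-order modulus of smoothness.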

Cited by 1 publication (1 citation statement). References 22 publications.
“…64, No. 1, pp. 294–303. Many versions of moduli of smoothness were defined later; for more details see [5], [6], [7], [8], [9], [10], [11], and [12]. In 2007, Jianjun [13] defined the weighted modulus in terms of the classical Jacobi weights ω_{α,β}(x) = (1 − x)^α (1 + x)^β, (1.1) where x ∈ [−1, 1], α, β ∈ J_p, and J_p is given by…”
Section: ISSN: 0067-2904 (mentioning)
confidence: 99%
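The Jacobi weight in equation (1.1) of the excerpt is straightforward to evaluate; a minimal sketch follows, with the sample points and the choice α = 1, β = 2 being illustrative only:

```python
import numpy as np

def jacobi_weight(x, alpha, beta):
    # Classical Jacobi weight w_{alpha,beta}(x) = (1 - x)^alpha * (1 + x)^beta
    # on [-1, 1], as in equation (1.1) of the cited excerpt.
    x = np.asarray(x, dtype=float)
    return (1.0 - x) ** alpha * (1.0 + x) ** beta

# Example evaluation at a few interior points (parameters chosen arbitrarily).
xs = np.array([-0.5, 0.0, 0.5])
print(jacobi_weight(xs, alpha=1.0, beta=2.0))  # → [0.375 1.    1.125]
```

For admissible α, β the weight vanishes at the endpoint(s), which is what makes it useful for measuring smoothness near the boundary of [−1, 1].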