2020
DOI: 10.11591/ijeecs.v20.i3.pp1584-1590
Best neural simultaneous approximation

Abstract: For many years, approximation concepts have been investigated from the viewpoint of neural networks, given the many applications shared by the two topics. Researchers have studied simultaneous approximation in 2-normed spaces and proved essential theorems concerning the existence, uniqueness, and degree of best approximation. Here, we define a new 2-norm in -space, with , so we call it a quasi 2-normed space ( ). The set of approximants is a space of feedforward neural networks constructed in this paper. Existence and unique…
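The symbols naming the space and its exponent did not survive extraction of the abstract. For orientation only, the textbook axioms of a quasi 2-norm (a standard definition; the paper's exact formulation and constant may differ) are:

```latex
% Quasi 2-norm axioms on a linear space X (textbook form, not quoted
% from the paper): a map \|\cdot,\cdot\| : X \times X \to [0,\infty) with
\begin{align*}
&\|x, y\| = 0 \iff x \text{ and } y \text{ are linearly dependent},\\
&\|x, y\| = \|y, x\|,\\
&\|\alpha x, y\| = |\alpha|\,\|x, y\| \quad \text{for all scalars } \alpha,\\
&\|x + z, y\| \le K\bigl(\|x, y\| + \|z, y\|\bigr) \quad \text{for some fixed } K \ge 1.
\end{align*}
```

For $K = 1$ this reduces to an ordinary 2-norm; the "quasi" refers to the relaxed triangle inequality with constant $K$.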


Cited by 5 publications (5 citation statements)
References 30 publications
“…Calculations have shown that in the short term (from the startup implementation stage), graphs 1, 2, and 3 coincide with an accuracy of δ = 10⁻³. Thus, the practical recommendations for using Volterra network modeling obtained in relation to the models (16)–(19) can be applied in studying the effectiveness of innovative startup projects (sups) in a turbulent environment.…”
Section: Simulation Results
confidence: 99%
“…These dense layers [10], [11], on the other hand, can only learn patterns that fall within their input feature space, while each convolution filter is capable of learning patterns locally (kernel space or region of interest) [12]-[14]. This implies the possibility of fragmenting input images into edges and textures for easy learning; they are also more useful for classification than global patterns [15].…”
Section: Methods 2.1 Dense Layers
confidence: 99%
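The locality claim in the quote above — a dense unit mixes every input, while a convolution filter only sees its kernel-sized region — can be demonstrated with a minimal NumPy sketch (my illustration, not code from the cited works; the array shapes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))  # toy single-channel "image"

# Dense layer: one output unit mixes *every* input pixel (global pattern).
dense_w = rng.standard_normal(8 * 8)
dense_out = image.ravel() @ dense_w  # scalar depending on all 64 pixels

# Convolution: the same 3x3 filter slides over the image, so each output
# value depends only on a local 3x3 region of interest (local pattern).
kernel = rng.standard_normal((3, 3))
conv_out = np.empty((6, 6))
for i in range(6):
    for j in range(6):
        conv_out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

# Perturb a pixel in the far corner: the dense output changes, but the
# convolution output at the opposite corner is untouched.
image2 = image.copy()
image2[7, 7] += 10.0
dense_out2 = image2.ravel() @ dense_w
conv_out2_00 = np.sum(image2[0:3, 0:3] * kernel)
print(dense_out != dense_out2)                   # dense unit sees the change
print(np.isclose(conv_out[0, 0], conv_out2_00))  # local filter does not
```

This is why convolutional filters pick up reusable local structure such as edges and textures, while dense layers respond to patterns spanning the whole input.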
“…The works [6,7,10,13] studied best trigonometric approximation of continuous functions. The works [8,9,11] studied approximation using many types of neural networks.…”
Section: Introduction
confidence: 99%
“…, $T_{n,s} \le C_2 e^{-C_3 n}$. Thus, (7) leads to a network $V(f)$ with $n^{s+1}$ neurons such that: $\|f(x) - V(f,x)\|_p \le C n^{-s} \sum \|D^j f(x)\|_p$…” — Eman S. Bhaya and Sara Saleh Mahdi, Journal of Al-Qadisiyah for Computer Science and Mathematics
confidence: 99%
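The quoted bound is of the typical form for such results: a feedforward network whose error decays polynomially as the neuron count grows. A minimal NumPy sketch of this phenomenon (my own construction, not the paper's network $V(f)$): a one-hidden-layer ReLU network that realizes the piecewise-linear interpolant of a smooth $f$ at $n+1$ knots, whose sup-norm error decays like $n^{-2}$:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_interpolant(f, n):
    """One-hidden-layer ReLU net reproducing the piecewise-linear
    interpolant of f at the knots k/n on [0, 1]."""
    knots = np.linspace(0.0, 1.0, n + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)
    # c[k] is the slope change at knot k (c[0] is the initial slope), so the
    # running sum of c over active ReLU units gives the local slope.
    c = np.concatenate(([slopes[0]], np.diff(slopes)))
    b = vals[0]

    def net(x):
        x = np.asarray(x, dtype=float)
        h = relu(x[..., None] - knots[:-1])  # hidden layer: n ReLU units
        return b + h @ c                     # linear output layer

    return net

f = np.sin  # smooth target on [0, 1]
xs = np.linspace(0.0, 1.0, 2001)
errs = {}
for n in (4, 8, 16):
    errs[n] = np.max(np.abs(f(xs) - relu_interpolant(f, n)(xs)))
    print(n, errs[n])
```

Doubling the number of hidden neurons cuts the sup-norm error by roughly a factor of four, consistent with the classical $\|f''\|_\infty / (8n^2)$ interpolation bound; the quoted result plays the same game in the $p$-quasi-norm with $n^{s+1}$ neurons and rate $n^{-s}$.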