2001
DOI: 10.1002/1099-1115(200102)15:1<37::aid-acs626>3.0.co;2-7
PAC learning in non-linear FIR models

Abstract: The PAC learning theory creates a framework to assess the learning properties of static models for which the data are assumed to be independently and identically distributed (i.i.d.). The present paper first extends the idea of PAC learning to cover the learning of modelling tasks with m‐dependent data, and then applies the resulting framework to evaluate learning of non‐linear FIR models. Also, the learning properties of FIR modelling with radial basis function networks are further specified. These results in…
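To illustrate the flavour of such results (not the paper's actual bound), the classical finite-hypothesis PAC sample-complexity estimate can be sketched, with a conservative (m + 1)-fold inflation of the sample size for m-dependent data; the function name, the Hoeffding-style constants, and the (m + 1) heuristic are illustrative assumptions, not the theorem proved in the paper:

```python
import math

def pac_sample_bound(eps, delta, hyp_count, m=0):
    """Classical finite-hypothesis PAC estimate of the number of samples
    needed so the empirical risk is within eps of the true risk with
    probability at least 1 - delta.  For m-dependent data, a common
    (conservative) device multiplies the i.i.d. requirement by (m + 1),
    since the sequence splits into m + 1 independent subsequences.
    Constants here are illustrative; the paper's bound may differ.
    """
    iid_bound = math.ceil(
        (math.log(hyp_count) + math.log(1.0 / delta)) / (2.0 * eps ** 2)
    )
    return iid_bound * (m + 1)

print(pac_sample_bound(0.1, 0.05, 1000, m=0))  # i.i.d. case -> 496
print(pac_sample_bound(0.1, 0.05, 1000, m=4))  # 4-dependent case -> 2480
```

The (m + 1) factor reflects the effective loss of independent observations in a dependent sequence, which is the qualitative message of extending PAC learning beyond i.i.d. data.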

Cited by 15 publications (6 citation statements)
References 13 publications
“…SVM was preferred as the general machine learning framework for classification over structures such as neural networks and radial basis function networks, primarily because of studies that have shown that when a limited amount of training data is available, neural networks [22] and radial basis functions [23] may not provide desirable generalization performance and may overfit the data.…”
Section: Methods
confidence: 99%
“…To extend the PAC learning theory to a learning scheme for m-dependent sequences with any distribution, we start from an inequality that bounds the summation of a sequence of m-dependent r.v.s. and was introduced in Najarian et al. [7].…”
Section: Extension of PAC Learning to m-Dependent Cases
confidence: 99%
“…The proof of Theorem III.1 given in Najarian et al. [7] is inspired by the proof of the central limit theorem for a sequence of dependent data by Iosifescu and coworkers [9]. Notice that if there exists no integer k such that n = k(m + 1), by defining k as k − [n/(m + 1)], a similar approach can be followed to extend the result of the theorem to such cases.…”
Section: Extension of PAC Learning to m-Dependent Cases
confidence: 99%
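The decomposition n = k(m + 1) mentioned above is the standard device for handling m-dependent sequences: elements spaced more than m apart are independent, so the sequence splits into m + 1 interleaved independent subsequences. A minimal sketch of that decomposition (an illustration of the general technique, not necessarily the paper's exact construction):

```python
import random

def blocks(x, m):
    """Split a length-n sequence of m-dependent values into m + 1
    interleaved subsequences of equal length k = n // (m + 1).
    Within each subsequence, consecutive elements sit more than m
    positions apart in the original ordering, so for an m-dependent
    process they are mutually independent -- the device behind
    inequalities for sums of m-dependent r.v.s.
    """
    k = len(x) // (m + 1)
    return [x[j::m + 1][:k] for j in range(m + 1)]

# A moving average of i.i.d. noise over m + 1 lags is m-dependent.
random.seed(0)
m = 2
noise = [random.gauss(0.0, 1.0) for _ in range(100 + m)]
x = [sum(noise[i:i + m + 1]) / (m + 1) for i in range(100)]

subs = blocks(x, m)
print([len(s) for s in subs])  # three independent subsequences of length 33
```

Each subsequence can then be treated with classical i.i.d. tools (e.g. a Hoeffding bound per subsequence), which is how the i.i.d. PAC machinery carries over to the m-dependent setting.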
“…A fixed-distribution extension of the PAC learning theory to include FIR models was given in (K. Najarian, Guy A. Dumont, and Michael S. Davies, 2001). The general results of this extension were summarized in a theorem that is briefly described here (for more details see K. Najarian, Guy A. Dumont, and Michael S. Davies, 2001):…”
Section: Extension of PAC Learning to m-Dependent Cases
confidence: 99%