2015
DOI: 10.1016/j.ins.2015.01.007

Distributed learning for Random Vector Functional-Link networks

Cited by 146 publications (64 citation statements: 0 supporting, 64 mentioning, 0 contrasting). Citing publications span 2015-2022.
References 27 publications.

The citation statements below are ordered by relevance.
“…The latest progress based on [14]: building on the statistical learning tools for big data analysis proposed by Slavakis et al. in [14], a great deal of new work has emerged. For example, [113] presented two distributed learning algorithms for training random vector functional-link (RVFL) networks over interconnected nodes, where the training data are distributed under a decentralized information structure. To tackle huge-scale convex and nonconvex big data optimization problems, [114] investigated a novel parallel, hybrid random/deterministic decomposition scheme combined with dictionary learning.…”
Section: The Latest Research Progress
confidence: 99%
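
The excerpt only names the algorithms of [113]; the sketch below illustrates one way such a decentralized RVFL scheme can be organized. It is a minimal illustration under stated assumptions (sigmoid hidden units, ridge regularization, and exact averaging standing in for an iterative consensus protocol), not the exact algorithm of [113]; all data, sizes, and names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def rvfl_features(X, W, b):
        """Fixed random expansion: sigmoid hidden units plus the direct
        input link that characterizes RVFL networks."""
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return np.hstack([X, H])

    def local_statistics(X, y, W, b):
        """Sufficient statistics of the local least-squares problem;
        raw data never leave the node."""
        H = rvfl_features(X, W, b)
        return H.T @ H, H.T @ y

    # Toy setup: 4 nodes, each holding a private shard of one regression task.
    d, hidden = 5, 20
    W = rng.standard_normal((d, hidden))   # random weights, shared by all nodes
    b = rng.standard_normal(hidden)
    true_w = rng.standard_normal(d)
    shards = []
    for _ in range(4):
        X = rng.standard_normal((50, d))
        y = X @ true_w + 0.1 * rng.standard_normal(50)
        shards.append((X, y))

    # Step 1: each node computes its local statistics.
    stats = [local_statistics(X, y, W, b) for X, y in shards]

    # Step 2: nodes agree on the network-wide averages. Here we average
    # directly; in a real decentralized network this step would be carried
    # out by iterative consensus (repeated averaging with neighbors).
    A = sum(s[0] for s in stats) / len(stats)
    c = sum(s[1] for s in stats) / len(stats)

    # Step 3: every node solves the same ridge-regularized system locally.
    lam = 1e-2
    beta = np.linalg.solve(A + lam * np.eye(A.shape[0]), c)

    # Sanity check on held-out data.
    X_test = rng.standard_normal((100, d))
    pred = rvfl_features(X_test, W, b) @ beta
    print("test MSE:", np.mean((pred - X_test @ true_w) ** 2))

Because only the statistics (H.T @ H, H.T @ y) are exchanged, the communication cost depends on the feature dimension rather than on the number of training samples, which is the usual appeal of this family of schemes.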
“…The solution to problem (8) can no longer be obtained in closed form; however, many efficient methods are available to solve it [33].…”
Section: B. Training the ESN
confidence: 99%
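
Problem (8) is not reproduced in the excerpt. Assuming, purely for illustration, that it is an l1-regularized least-squares readout (a common reason the closed-form ridge solution is lost when training an ESN), the sketch below shows one standard iterative solver, proximal gradient descent (ISTA). The reservoir-state matrix H and all sizes are hypothetical.

    import numpy as np

    def ista_lasso(H, y, lam=0.1, n_iter=500):
        """ISTA for min_w 0.5*||Hw - y||^2 + lam*||w||_1 — one of the
        'efficient methods' alluded to, not necessarily the one in [33]."""
        L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
        w = np.zeros(H.shape[1])
        for _ in range(n_iter):
            grad = H.T @ (H @ w - y)           # gradient of the smooth part
            z = w - grad / L                   # gradient step
            w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return w

    rng = np.random.default_rng(1)
    H = rng.standard_normal((200, 50))         # e.g., collected reservoir states
    w_true = np.zeros(50)
    w_true[:5] = rng.standard_normal(5)
    y = H @ w_true + 0.01 * rng.standard_normal(200)
    print("nonzeros recovered:", np.sum(np.abs(ista_lasso(H, y)) > 1e-3))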
“…Among them, we can cite distributed protocols for support vector machines [7], functional-link networks [8], linear neurons [9], adaptive resonance theory (ART) networks [10], and many others. However, as we argued in [11], what is needed in many contexts is a distributed training algorithm for recurrent neural networks (RNNs).…”
Section: Introduction
confidence: 99%
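
The protocols cited above train different models, but decentralized averaging with neighbors is a building block many of them share. The sketch below shows that primitive in isolation; the ring topology, step size, and node values are hypothetical, and it does not reproduce any specific cited protocol.

    import numpy as np

    def consensus_average(values, neighbors, n_rounds=50, step=0.5):
        """Iterative neighbor averaging: drives every node's local estimate
        (e.g., a weight vector) to the network-wide mean with no central
        coordinator. A generic sketch of the consensus primitive."""
        x = [np.asarray(v, dtype=float) for v in values]
        for _ in range(n_rounds):
            new = []
            for i, xi in enumerate(x):
                nb = np.mean([x[j] for j in neighbors[i]], axis=0)
                new.append(xi + step * (nb - xi))  # move toward neighbors' mean
            x = new
        return x

    # 4 nodes on a ring, each starting from a different local estimate.
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    locals_ = [np.array([i, 10.0 - i]) for i in range(4)]
    print(consensus_average(locals_, neighbors))   # every row converges to [1.5, 8.5]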
“…At this stage, a generic classifier conceived to work in a vector space can easily deal with the classification task. We decided to use a k-NN classifier, but different methods could be adopted, for example, several types of single-layer feed-forward networks (e.g., neuro-fuzzy networks or random vector functional-links [34,38]). As in the case of the Gk-NN classifier, the parameter k is chosen a priori by the user.…”
Section: Classification by Embedding in a Vector Space
confidence: 99%
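
To make the classification stage concrete, here is a minimal k-NN majority vote in the embedding space, with k fixed a priori as in the excerpt. The Gaussian "embeddings" below are a hypothetical stand-in for whatever the embedding procedure actually produces, and any vector-space classifier could replace the voting step.

    import numpy as np

    def knn_predict(X_train, y_train, X_query, k=3):
        """Plain k-NN: Euclidean distances, majority vote among k nearest."""
        preds = []
        for q in X_query:
            d = np.linalg.norm(X_train - q, axis=1)     # distances to all points
            nearest = np.argsort(d)[:k]                 # indices of the k closest
            labels, counts = np.unique(y_train[nearest], return_counts=True)
            preds.append(labels[np.argmax(counts)])     # majority vote
        return np.array(preds)

    rng = np.random.default_rng(2)
    # Two Gaussian blobs standing in for embedded objects of two classes.
    X = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(3, 1, (30, 8))])
    y = np.array([0] * 30 + [1] * 30)
    print(knn_predict(X, y, X[:5] + 0.1, k=3))          # expected: [0 0 0 0 0]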