2015
DOI: 10.1109/tpds.2014.2357019
Parallel Architectures for Learning the RTRN and Elman Dynamic Neural Networks

Cited by 45 publications (12 citation statements)
References 15 publications
“…Other directions of future work include the investigation of further optimization strategies for parallelizing RBM learning, as well as the possibility of using the hybrid environment provided by CPUs and coprocessors. A promising approach to the learning problem that should be considered is the use of various types of computational intelligence systems [2,11,12,13,16,17,18,19,20,27,41,43,44], including neural networks, in particular convolutional neural networks [28]. Such systems and their parallel implementation on the Intel MIC architecture can be useful in many applications, such as face recognition [33,34,35,36].…”
Section: Discussion
confidence: 99%
“…To efficiently handle large-scale CNNs and big data, outstanding distributed-architecture-based solutions were implemented in [14,20]. Adam [21] is an efficient and scalable deep learning training system, optimizing and balancing workload computation and communication through entire-system co-design.…”
Section: Related Work
confidence: 99%
“…The first group of parameters of the proposed system are the parameters describing differences between the reference signatures and the templates in the partitions. They are used in the construction of fuzzy rules described later (see (9)) and determined as follows: …”
Section: Training Phase
confidence: 99%