Published: 2015
DOI: 10.1007/s00500-015-1599-3

Parallel implementation of multilayered neural networks based on Map-Reduce on cloud computing clusters

Abstract: To meet the requirements of big data processing, this paper presents an efficient mapping scheme for a fully connected multilayered neural network, trained with the back-propagation (BP) algorithm, on MapReduce-based cloud computing clusters. A batch-training (or epoch-training) regime is realized by segmenting the samples effectively across the cluster: each node trains separately on its own segment, and the weights are then summarized and updated iteratively until convergence. For a parallel BP algorithm on the clusters and a…
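The data-parallel batch-training idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: plain Python functions stand in for MapReduce map and reduce tasks, and a simple linear model with squared-error loss replaces the multilayered network; all names and parameters here are illustrative assumptions.

```python
# Hypothetical sketch of MapReduce-style batch training:
# - "map" tasks each compute a partial gradient over one shard of samples,
# - the "reduce" step sums the partial gradients and applies one weight
#   update, i.e. one batch-training iteration.

def map_gradient(weights, shard):
    """Map phase: partial gradient of 0.5*(w.x - y)^2 over one shard."""
    grad = [0.0] * len(weights)
    for x, y in shard:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for i, xi in enumerate(x):
            grad[i] += err * xi
    return grad

def reduce_update(weights, partial_grads, n_samples, lr=0.1):
    """Reduce phase: sum shard gradients, average, take one descent step."""
    total = [sum(g[i] for g in partial_grads) for i in range(len(weights))]
    return [w - lr * g / n_samples for w, g in zip(weights, total)]

def train(samples, n_shards=4, epochs=500):
    shards = [samples[i::n_shards] for i in range(n_shards)]
    weights = [0.0, 0.0]
    for _ in range(epochs):
        # In a real cluster the map calls run in parallel on separate nodes.
        partials = [map_gradient(weights, s) for s in shards]
        weights = reduce_update(weights, partials, len(samples))
    return weights

# Toy target y = 2*x + 1, encoded with a bias feature x1 = 1.
data = [([x, 1.0], 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w = train(data)
```

Because each shard's gradient is computed independently, the only cross-node communication per iteration is the weight summary in the reduce step, which is what makes the scheme fit the MapReduce model.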

Cited by 14 publications (6 citation statements). References 40 publications.
“…The proposed architecture is based on equations (11), (12) and (14) and is organized into three separate phases, shown in Figure 5. The time required to execute each phase determines the performance increase that can be achieved by proper pipeline.…”
Section: Hardware Only Configuration
confidence: 99%
“…Accelerator Architecture used for the calculations. The proposed design uses fixed-point arithmetic for all linear functions (add, mul, cmp) and single-precision floating-point for non-linear functions, like log and exp in (12). To specify the range of the fixed-point arithmetic, simulations were run and statistics were collected based on all available training and test data patterns.…”
confidence: 99%
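The mixed-precision split described in the statement above can be sketched as follows. The Q8.8 format, the sigmoid non-linearity, and all names below are illustrative assumptions, not details from the cited design, which chose its fixed-point range from collected statistics.

```python
# Hypothetical mixed-precision scheme: linear operations (add, mul) run in
# Q8.8 fixed point (16-bit-style integers with 8 fractional bits), while
# non-linear functions fall back to floating point.
import math

FRAC_BITS = 8           # assumed fractional width for illustration
SCALE = 1 << FRAC_BITS  # Q8.8 scaling factor (256)

def to_fixed(x):  return int(round(x * SCALE))
def to_float(q):  return q / SCALE

def fx_add(a, b): return a + b                 # exact in fixed point
def fx_mul(a, b): return (a * b) >> FRAC_BITS  # rescale after multiply

def fx_sigmoid(q):
    """Non-linear step: convert to float, evaluate, convert back."""
    return to_fixed(1.0 / (1.0 + math.exp(-to_float(q))))

# One neuron: y = sigmoid(w*x + b), with all linear math in fixed point.
w, x, b = to_fixed(0.5), to_fixed(2.0), to_fixed(-0.25)
y = to_float(fx_sigmoid(fx_add(fx_mul(w, x), b)))
```

The quantization error stays below one least-significant bit (1/256 here) for the linear path, which is why the range of the fixed-point format must be chosen from statistics over the actual data, as the statement notes.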
“…It was found that among all, MapReduce performs the worst, with a slight improvement in the case of Haloop, and Spark performs the best due to in‐memory computation. Similarly, Zhang and Xiao have implemented a parallel multilayered neural network employing a BP training algorithm on MapReduce (MRBP) on cloud computing clusters for speedup.…”
Section: Big Data Inference Engine
confidence: 99%
“…This has always been one of the ultimate goals of artificial intelligence, which is one of critical components to structure the intelligent interconnections of the Internet of Things. Intelligent voice interaction technology has involuntarily become one of the current research hotspots [4] [5] [6] [7] [8].…”
Section: Introduction
confidence: 99%