Proceedings of the 3rd International Conference on Networking, Information Systems & Security 2020
DOI: 10.1145/3386723.3387826

Big Data Solutions Proposed for Cluster Computing Systems Challenges

Cited by 11 publications (15 citation statements); references 13 publications.
“…the time-consumption issue prevents trained deep learning models from speedily obtaining more accurate information and performing the required tasks. To curb this execution-time problem, we applied the Hadoop framework [56] to our proposed approach; this framework improves the forecasting effectiveness and scalability of our proposed fuzzy deep learning model. The Hadoop platform parallelizes our FDLC across multiple computing nodes.…”
Section: Center of Sums (COS) Method
confidence: 99%
“…Besides, the Hadoop framework is implemented in our work; it parallelizes the word-embedding learning tasks between five machines: one master node and four slave nodes. The Hadoop framework uses its HDFS to store the dataset to be embedded and the set of representation vectors (the result obtained by applying the word embedding method), and the MapReduce programming framework to process them [56]. The Hadoop framework's primary goal is to parallelize our embedding process across multiple machines to improve the AC and reduce the TC.…”
Section: A. Experiments
confidence: 99%
“…Consequently, both the time and space effectiveness of conventional machine learning algorithms diminish dramatically when handling Big Data. To remedy these challenges, in this study we applied the Big Data Hadoop framework [102], as depicted in Fig. 8, which shows the implementation of our proposal using the Hadoop framework with its distributed file system and MapReduce programming model.…”
Section: J. Parallelization of Our Proposed Approach
confidence: 99%
“…To date, it is a highly fault-tolerant storage system that stores huge amounts of data reliably and redundantly on multiple low-cost machines. This rescues the system from eventual data losses in case of failure [5,6]. The input data of a Hadoop job are stored as files in HDFS.…”
Section: Introduction
confidence: 99%
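The redundant storage across low-cost machines described in that statement is controlled by HDFS's block replication factor. A minimal configuration fragment for `hdfs-site.xml` (the `dfs.replication` property is standard Hadoop configuration; the value 3 is Hadoop's default):

```xml
<!-- hdfs-site.xml: each HDFS block is copied to this many DataNodes, -->
<!-- so the failure of one low-cost machine does not lose data. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```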
“…The user specifies a map function that processes a set of input key/value pairs to generate a set of intermediate key/value pairs; finally, the reduce function merges all intermediate values associated with the same intermediate key. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity computers [6,7].…”
Section: Introduction
confidence: 99%
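The map/shuffle/reduce contract described in that statement can be sketched in plain Python. This is a local, single-process simulation of the execution model, not Hadoop's actual API; the function and variable names are illustrative:

```python
from collections import defaultdict

def map_fn(_, line):
    # Map: emit an intermediate (word, 1) pair for each word in the input line.
    for word in line.split():
        yield word, 1

def reduce_fn(word, counts):
    # Reduce: merge all intermediate values that share the same intermediate key.
    yield word, sum(counts)

def run_mapreduce(records, map_fn, reduce_fn):
    # Shuffle phase: group intermediate values by their intermediate key,
    # as the framework would do between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in records:
        for k, v in map_fn(key, value):
            groups[k].append(v)
    result = {}
    for k, vs in sorted(groups.items()):
        for out_k, out_v in reduce_fn(k, vs):
            result[out_k] = out_v
    return result

lines = [(0, "big data hadoop"), (1, "big data")]
print(run_mapreduce(lines, map_fn, reduce_fn))
# → {'big': 2, 'data': 2, 'hadoop': 1}
```

On a real cluster the map and reduce calls run in parallel on different machines; the semantics of the result are the same as this sequential sketch.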