2014 IEEE International Conference on Computational Intelligence and Computing Research 2014
DOI: 10.1109/iccic.2014.7238418
Big data analysis using Hadoop cluster

Abstract: This paper presents a new set of approaches for analyzing the class of datasets known as Big Data. Big Data is a widely misinterpreted term; in computing, it refers to the process of gathering, organizing, and analyzing data in order to extract information from it. Such analysis was not previously feasible because of challenges along one or more of the 3 Vs of Big Data: first, volume, meaning the data is too large; second, variety, whic…

Cited by 13 publications (5 citation statements)
References 7 publications
“…A block is the minimum amount of data that can be read or written. The default size of an HDFS block is 128 MB. Files stored in HDFS are split into multiple independent blocks, called chunks; for example, if a file is 50 MB, its HDFS block occupies only 50 MB of memory within the default 128 MB [36,37,38,39,40,41,42]. The NameNode is responsible for storing the metadata, meaning it holds all the information about where the DataNodes' data is stored; it contains the directory structure and the location of the data.…”
Section: Hadoop Distributed File System (HDFS), mentioning
confidence: 99%
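The block behavior described in the statement above can be illustrated with a small sketch. This is not Hadoop's actual implementation, only a hypothetical helper showing how a file is logically divided into HDFS-sized blocks and why a 50 MB file consumes just 50 MB rather than a full 128 MB block:

```python
DEFAULT_BLOCK_SIZE_MB = 128  # HDFS default block size

def split_into_blocks(file_size_mb, block_size_mb=DEFAULT_BLOCK_SIZE_MB):
    """Return the sizes (in MB) of the blocks a file would logically occupy.

    Illustrative only: HDFS blocks hold at most block_size_mb, and the last
    block is only as large as the data remaining, so small files do not
    waste a full block of storage.
    """
    blocks = []
    remaining = file_size_mb
    while remaining > 0:
        blocks.append(min(block_size_mb, remaining))
        remaining -= block_size_mb
    return blocks

print(split_into_blocks(50))   # [50]  -> only 50 MB used, not 128 MB
print(split_into_blocks(300))  # [128, 128, 44]
```

As the second call shows, a 300 MB file would span three blocks, with the final block holding only the 44 MB that remain.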
“…Ankita Saldi et al. [1] focused on statistical data generated in industries. When the generated data arrives in different formats, the environment becomes more challenging to work with.…”
Section: Literature Survey, mentioning
confidence: 99%
“…"Big Data is a collection of data sets so large and complex that it becomes difficult to process them using traditional database management tools or data-processing applications. The challenges lie in capturing, preserving, storing, searching, sharing, transferring, analyzing, and visualizing these data [7]." The world today is immersed in a sea of data.…”
Section: Client Server, mentioning
confidence: 99%