2016
DOI: 10.1080/13658816.2015.1131830
A spatiotemporal indexing approach for efficient processing of big array-based climate data with MapReduce

Cited by 56 publications
(35 citation statements)
References 14 publications
“…Krishnan et al. investigated the use of MapReduce to generate DEMs by gridding LIDAR data [22]. Li et al. utilized Hadoop MapReduce to enable parallelization of big climate data processing [11,13]. Besides these approaches targeting specific problems with Hadoop, tools have also been developed to handle general geospatial data processing tasks and are being adopted in GIScience communities.…”
Section: Hadoop for Geospatial Data Processing
confidence: 99%
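The gridding approach attributed to Krishnan et al. above — rasterizing LIDAR point clouds into a DEM with MapReduce — can be sketched as a minimal map/reduce pair. This is a hypothetical single-process illustration of the idea (map each point to its grid cell, reduce by averaging elevations per cell), not the cited implementation:

```python
from collections import defaultdict

def map_point(point, cell_size=1.0):
    """Map phase: assign an (x, y, z) LIDAR return to its grid cell key."""
    x, y, z = point
    cell = (int(x // cell_size), int(y // cell_size))
    return cell, z

def reduce_cell(elevations):
    """Reduce phase: aggregate the elevations that fell into one cell."""
    return sum(elevations) / len(elevations)

def grid_points(points, cell_size=1.0):
    """Shuffle + reduce: group mapped elevations by cell, then aggregate."""
    groups = defaultdict(list)
    for p in points:
        cell, z = map_point(p, cell_size)
        groups[cell].append(z)
    return {cell: reduce_cell(zs) for cell, zs in groups.items()}

points = [(0.2, 0.3, 10.0), (0.8, 0.1, 12.0), (1.5, 0.5, 20.0)]
dem = grid_points(points)
# cell (0, 0) averages two returns -> 11.0; cell (1, 0) holds one -> 20.0
```

In an actual Hadoop job, `map_point` and `reduce_cell` would run as distributed map and reduce tasks, with the framework performing the shuffle that `grid_points` simulates here.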
“…Hadoop, a distributed computing platform leveraging commodity computers, is gaining increasing popularity in geoscience communities, as reviewed in Section 2. While much effort has gone into adapting Hadoop for processing big geospatial data (e.g., [9][10][11][12][13]), how to efficiently handle varying geoprocessing workloads by dynamically adjusting the amount of computing resources (the number of nodes in a Hadoop cluster) has barely been explored. The ability to dynamically adjust computing resources is important because the processing workload of operational geospatial applications is dynamic rather than static [14]; for example, the data processing workload of an emergency response system (such as for wildfires, tsunamis, and earthquakes) peaks during the emergency event, which requires adequate computing power to respond promptly [14,15].…”
Section: Introduction
confidence: 99%
“…Hadoop has achieved unprecedented success in implementing many real-world distributed tasks. Some research focuses on spatial data storage and querying under the Hadoop framework, such as SpatialHadoop (Eldawy & Mokbel, 2015), Hadoop-GIS (Aji et al., 2013), SciHive (Geng, Huang, Zhu, Ruan, & Yang, 2013), GeoMesa (Fox, Eichelberger, Hughes, & Lyon, 2013), TerraFly (Cary, Yesha, Adjouadi, & Rishe, 2010), and GeoBase (Li, Hu, Schnase, & Duffy, 2016). However, the coupling between the MapReduce model and HDFS is tight.…”
Section: Parallel and Distributed Spatial Data Indexing
confidence: 99%
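The spatiotemporal indexing this citation context discusses typically linearizes a (time, row, column) array cell into a single sortable key, so that a distributed store or MapReduce job can locate and range-scan cells efficiently. A minimal sketch of such a composite key (hypothetical dimensions and encoding, not the indexed paper's actual scheme):

```python
def st_key(t, row, col, n_rows, n_cols):
    """Linearize a (time step, row, col) cell into one sortable integer key.

    Time-major ordering keeps all cells of one time step contiguous,
    which suits temporal range scans over gridded climate arrays.
    """
    return (t * n_rows + row) * n_cols + col

def st_decode(key, n_rows, n_cols):
    """Invert st_key back to the (time step, row, col) triple."""
    col = key % n_cols
    row = (key // n_cols) % n_rows
    t = key // (n_cols * n_rows)
    return t, row, col

# Example: a 180 x 360 global grid (1-degree cells), time step 3
key = st_key(3, 10, 20, n_rows=180, n_cols=360)
```

Because keys within one time step form a contiguous integer range, a query for that step becomes a single key-range scan rather than a full dataset pass.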
“…• Based on the good scalability, large-scale concurrent processing capability, and MapReduce parallel model of NoSQL databases, some preliminary research has explored distributed storage and processing of spatial data [20][21][22][23][24].…”
Section: Not Only SQL (NoSQL)-Enabled Big Data Management
confidence: 99%