2015
DOI: 10.1016/j.jpdc.2014.08.007

Adaptive, scalable and reliable monitoring of big data on clouds

Abstract: Real-time monitoring of cloud resources is crucial for a variety of tasks such as performance analysis, workload management, capacity planning and fault detection. Applications producing big data make the monitoring task very difficult at high sampling frequencies because of high computational and communication overheads in collecting, storing, and managing information. We present an adaptive algorithm for monitoring big data applications that adapts the intervals of sampling and frequency of updates to data c…
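The abstract describes adapting the sampling interval to data characteristics in order to limit monitoring overhead. As a rough illustration of that general idea (not the authors' actual algorithm; all function names and thresholds here are assumptions chosen for the example), a monitor might shorten the interval when a metric's recent samples are volatile and lengthen it when they are stable:

```python
import statistics

# Illustrative sketch only, not the algorithm from the paper.
# Idea: sample faster when the monitored metric is changing a lot,
# back off when it is stable, within configured bounds.

def next_interval(samples, interval, min_i=1.0, max_i=60.0, threshold=0.05):
    """Return the next sampling interval (seconds) given recent samples.

    If the relative standard deviation of `samples` exceeds `threshold`,
    halve the interval (capture the load change); otherwise double it
    (save overhead while the load is stable).
    """
    if len(samples) < 2:
        return interval
    mean = statistics.fmean(samples)
    if mean == 0:
        return interval
    variability = statistics.stdev(samples) / abs(mean)
    if variability > threshold:
        return max(min_i, interval / 2)   # load is changing: sample faster
    return min(max_i, interval * 2)       # load is stable: back off
```

A controller loop would call this after each batch of samples and sleep for the returned interval; the paper's actual mechanism also adapts the frequency of updates sent to the collector, which this sketch omits.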

Cited by 42 publications (16 citation statements)
References 22 publications
“…Real-time monitoring of cloud resources is crucial for a variety of tasks including performance analysis, workload management, capacity planning and fault detection (Andreolini et al 2015). Progress has been made in cloud system monitoring and tracking.…”
Section: Cloud Monitoring and Tracking
confidence: 99%
“…For example, Yang et al (2015a) investigated the challenges posed by industrial Big Data and complex machine working conditions and proposed a framework for implementing cloud-based, machine health prognostics. To limit computational and communication costs and guarantee high reliability in capturing relevant load changes, Andreolini et al (2015) presented an adaptive algorithm for monitoring Big Data applications that adapts the intervals of sampling and frequency of updates to data characteristics and administrator needs. Bae et al (2014) proposed an intrusive analyzer that detects interesting events (such as task failure) occurring in the Hadoop system.…”
Section: Cloud Monitoring and Tracking
confidence: 99%
“…Real-time monitoring, gating, filtering, and throttling of streaming data requires new approaches due to the "variety of tasks, such as performance analysis, workload management, capacity planning, and fault detection. Applications producing Big Data make the monitoring task very difficult at high-sampling frequencies because of high computational and communication overheads [13]." • Provisioning and package management activities to support automated deployment and configuration of software and services.…”
Section: System Management
confidence: 99%
“…Scholars including computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, and sociologists are clamoring for access to the massive quantities of Big Data produced by and about people, things, and their interactions (Boyd & Crawford, 2012). Research on Big Data related to computer science has focused on system performance and scalability technology, such as virtualization, Hadoop, MapReduce, and security for increases in heterogeneous environments (Andreolini, Colajanni, Pietri, & Tosi, 2015;Kshetri, 2014). Few studies have focused on Big Data for system availability, DR in particular (Clitherow et al, 2008;Serrelis & Alexandris, 2006).…”
Section: Big Data and Disaster Recovery
confidence: 99%