Sliding Hidden Markov Model for Evaluating Discrete Data
2013 · DOI: 10.1007/978-3-642-40725-3_19

Cited by 6 publications (3 citation statements). References 9 publications.
“…Discretized MMPPs (or hidden Markov models) replicate the burstiness of TCP packet traces, which can be clustered in groups, and, hence, allow model parameters to converge on multiple traces simultaneously at reduced computational complexity [10]. Further, arrival parameters of queueing models can be updated incrementally via online EM learning algorithms [2,6,14], which are suitable for live systems.…”
Section: Results
confidence: 99%
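The statement above describes discretized MMPPs (Markov-modulated Poisson processes) as burstiness models for TCP packet traces. A minimal sketch of such a process, assuming a hypothetical two-state chain with hand-picked transition and rate parameters (none of these values come from the cited works):

```python
import math
import random

def simulate_discrete_mmpp(n_slots, p_stay=(0.95, 0.90), rates=(0.2, 5.0), seed=42):
    """Simulate a discretized 2-state MMPP: a hidden Markov chain whose
    current state selects the Poisson arrival rate for each time slot,
    producing bursty per-slot counts reminiscent of TCP packet traces."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method: multiply uniforms until the product drops below exp(-lam).
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    state, counts = 0, []
    for _ in range(n_slots):
        counts.append(poisson(rates[state]))
        # Remain in the current state with probability p_stay[state],
        # otherwise switch; sticky states create the bursty phases.
        if rng.random() > p_stay[state]:
            state = 1 - state
    return counts
```

Because the two rates differ sharply (0.2 vs. 5.0 arrivals per slot) and the states are sticky, the resulting counts alternate between quiet and bursty phases, which is the property the quoted passage attributes to MMPP/HMM traffic models.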
“…Lube is a geo-distributed framework that reduces query response times by detecting bottlenecks at runtime [42]. Lube monitors performance metrics (CPU, memory, network, and disk) in real time and uses the Autoregressive Integrated Moving Average (ARIMA) [59] or the Sliding Hidden Markov Model (SlidHMM) [60] to detect resource bottlenecks at runtime. The scheduling algorithm considers data locality and bottleneck severity when assigning tasks to worker nodes; the late-binding algorithm from Sparrow [61] is used to avoid false positives when detecting bottlenecks, by holding a task for a short time before submitting it to a worker node.…”
Section: Spark-based Framework
confidence: 99%
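The passage above describes flagging a bottleneck when a runtime forecast of a resource metric looks bad. A minimal sketch of that idea, using simple exponential smoothing as a stand-in for the ARIMA/SlidHMM predictors the quote mentions; the function name, smoothing factor, and threshold are all hypothetical, not taken from Lube:

```python
def detect_bottleneck(utilization, alpha=0.5, threshold=0.8):
    """Flag a resource as a bottleneck when a one-step-ahead forecast of
    its utilization exceeds `threshold`. Exponential smoothing here is a
    deliberately simple stand-in for the ARIMA / SlidHMM forecasters
    described in the quoted passage."""
    # Seed the forecast with the first observation, then blend each new
    # sample into the running estimate.
    forecast = utilization[0]
    for u in utilization[1:]:
        forecast = alpha * u + (1 - alpha) * forecast
    return forecast > threshold
```

A scheduler built around such a predicate could then weigh bottleneck severity alongside data locality, as the quoted description of Lube suggests.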
“…The model is trained on observations until parameter convergence (A, B, ζ, χ, π become fixed). Since the HHMM parameters are trained at runtime on continuously updated response times of components, we use the sliding-window technique [28] to read the observations, with a window size of 10–100 observations, which generates multiple windows over different observations. The obtained windows are used to feed the HHMM.…”
confidence: 99%
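The sliding-window reading described above can be sketched as a small generator; the function name and the `step` parameter are illustrative assumptions, while the 10–100 window size comes from the quoted passage:

```python
def sliding_windows(observations, size=10, step=1):
    """Yield fixed-size windows over a stream of observations (e.g.
    component response times), so model parameters such as an HMM's or
    HHMM's can be re-estimated incrementally as new data arrives."""
    # Each window shifts by `step`, so consecutive windows overlap when
    # step < size, which is what keeps the re-training incremental.
    for start in range(0, len(observations) - size + 1, step):
        yield observations[start:start + size]
```

Each yielded window would then be fed to the training routine in turn, instead of retraining on the full, ever-growing trace.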