Markov chains are a common way of modelling the progression of a chronic disease through various severity states. For these models, a transition matrix containing the probabilities of moving from one state to another within a specific time interval is usually estimated from cohort data. Quite often, however, the cohort is observed at specific times with intervals that may be longer than the interval of interest. The estimated transition matrix then needs to be decomposed in order to obtain the desired short-interval transition matrix suited to the model. Although simple to implement, this matrix decomposition can still result in an invalid short-interval transition matrix with negative or complex entries. In this paper, we present a method for computing short-interval transition matrices that is based on regularization techniques. Our method operates separately on each row of the invalid short-interval transition matrix, aiming to minimize an appropriate distance measure. We test our method on matrices of various structures and sizes, and evaluate its performance on a real-life transition model for HIV-infected individuals.
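The decomposition-and-repair idea can be sketched as follows. A yearly transition matrix is decomposed into a monthly one by taking its 12th matrix root; any complex residue is dropped, and each row is then regularized independently. As one concrete choice of "appropriate distance measure", the sketch below projects each row onto the probability simplex under Euclidean distance; the matrix, the root order, and the distance measure are illustrative assumptions, not necessarily those of the paper.

```python
import numpy as np

def project_row_to_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def short_interval_matrix(P, k=12):
    """k-th root of P via eigendecomposition, then row-wise regularization."""
    w, V = np.linalg.eig(P.astype(complex))
    # Principal k-th root; take the real part to drop any complex residue.
    R = (V @ np.diag(w ** (1.0 / k)) @ np.linalg.inv(V)).real
    return np.array([project_row_to_simplex(row) for row in R])

# Yearly transition matrix over three severity states (invented numbers).
P_year = np.array([[0.70, 0.20, 0.10],
                   [0.00, 0.60, 0.40],
                   [0.00, 0.00, 1.00]])
P_month = short_interval_matrix(P_year)  # rows sum to 1, entries >= 0
```

Operating row by row keeps the repair local: a single invalid row does not perturb rows that are already valid probability distributions.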
Diagnosing ventilator-associated pneumonia in mechanically ventilated patients in intensive care units is currently seen as a clinical challenge. The difficulty in diagnosing ventilator-associated pneumonia stems from the lack of a simple yet accurate diagnostic test. To assist clinicians in diagnosing and treating patients with pneumonia, a decision-theoretic network was designed with the help of domain experts. A major limitation of this network is its inability to represent pneumonia as a dynamic process that progresses over time. In this paper, we construct a dynamic Bayesian network that explicitly captures the development of the disease through time. We discuss how probability elicitation from domain experts serves to quantify the dynamics involved and show how the nature of patient data helps reduce the computational burden of inference. We evaluate the diagnostic performance of our dynamic model and report promising results.
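In miniature, a dynamic model of this kind can be pictured as a two-timeslice network with a single hidden disease variable and one observable finding, filtered forward one day at a time. All variable names and probabilities below are invented for illustration and are not taken from the paper's network.

```python
import numpy as np

# Two-timeslice sketch: hidden "pneumonia" state (no/yes) and one finding,
# "abnormal temperature" (absent/present). All numbers are invented.
T = np.array([[0.90, 0.10],   # P(state_t | state_{t-1}); rows: no, yes
              [0.20, 0.80]])
O = np.array([[0.85, 0.15],   # P(finding | state); rows: no, yes
              [0.30, 0.70]])

def filter_step(belief, finding):
    predicted = belief @ T            # predict: roll belief one day forward
    updated = predicted * O[:, finding]  # update: weight by the finding
    return updated / updated.sum()

belief = np.array([0.95, 0.05])       # prior: pneumonia initially unlikely
for finding in [1, 1, 1]:             # three days of abnormal temperature
    belief = filter_step(belief, finding)
```

After three consecutive abnormal findings the posterior probability of pneumonia has risen well above its prior, which is exactly the temporal accumulation of evidence that a static network cannot represent.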
Sequential statistical models, such as dynamic Bayesian networks and, more specifically, hidden Markov models, describe stochastic processes over time. In this paper, we study for these models the effect of consecutive similar observations on the posterior probability distribution of the represented process. We show that, given such observations, the posterior distribution converges to a limit distribution. Building upon the rate of this convergence, we further show that, given some wished-for level of accuracy, part of the inference can be forestalled. To evaluate our theoretical results, we study their implications for a real-life model from the medical domain and for a benchmark model for agricultural purposes. Our results indicate that whenever consecutive similar observations arise, the computational requirements of inference in Markovian models can be drastically reduced.
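The convergence behaviour can be illustrated with a small hidden Markov model: feeding the filter the same observation over and over drives the posterior to a limit distribution, and once successive beliefs differ by less than the wished-for accuracy, the remaining filtering steps can be skipped and the current belief reused. The transition and observation matrices below are invented for illustration.

```python
import numpy as np

# Two-state hidden Markov model with two observation symbols (invented numbers).
T = np.array([[0.7, 0.3],     # P(state_t | state_{t-1})
              [0.4, 0.6]])
O = np.array([[0.9, 0.1],     # P(obs | state)
              [0.2, 0.8]])

def filter_step(belief, obs):
    b = (belief @ T) * O[:, obs]
    return b / b.sum()

belief = np.array([0.5, 0.5])
diffs = []                     # max change in the posterior per step
for _ in range(20):
    new = filter_step(belief, 0)  # the same observation, repeated
    diffs.append(np.abs(new - belief).max())
    belief = new
```

The per-step change `diffs` shrinks geometrically, so in practice only a handful of filtering steps are needed before the posterior is, up to the chosen accuracy, the limit distribution.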