Stochastic control problems that arise in reliability and maintenance optimization typically assume that information used for decision-making is obtained according to a predetermined sampling schedule. In many real applications, however, there is a high sampling cost associated with collecting such data. It is therefore of equal importance to determine when information should be collected and to decide how this information should be utilized for maintenance decision-making. This type of joint optimization has been a long-standing problem in the operations research and maintenance optimization literature, and very few results regarding the structure of the optimal sampling and maintenance policy have been published. In this paper, we formulate and analyze the joint optimization of sampling and maintenance decision-making in the partially observable Markov decision process framework. We prove the optimality of a policy that is characterized by three critical thresholds, which have practical interpretation and give new insight into the value of condition-based maintenance programs in life-cycle asset management. Illustrative numerical comparisons are provided that show substantial cost savings over existing suboptimal policies.
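Purely as an illustration of what a threshold-structured policy can look like, the sketch below applies control limits to the posterior probability that the system is unhealthy. The action names, the number and ordering of the limits, and the mapping of thresholds to actions are assumptions made for illustration only; they are not the specific three-threshold structure proved optimal in the paper.

```python
# Illustrative only: a control-limit rule on the belief that the system is
# in the unhealthy state. Threshold names and actions are hypothetical.

def choose_action(p_unhealthy, sample_threshold, maintain_threshold, replace_threshold):
    """p_unhealthy: current posterior probability of the unhealthy state."""
    if p_unhealthy >= replace_threshold:
        return "preventive_replacement"
    if p_unhealthy >= maintain_threshold:
        return "preventive_maintenance"
    if p_unhealthy >= sample_threshold:
        return "collect_sample"        # pay the sampling cost to refine the belief
    return "continue_operation"        # keep running without sampling
```

A policy of this form only requires tracking a single belief statistic between decision epochs, which is what makes threshold characterizations attractive in practice.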
In this paper, we present a parameter estimation procedure for a condition-based maintenance model under partial observations. Systems can be in a healthy or unhealthy operational state, or in a failure state. System deterioration is driven by a continuous time homogeneous Markov chain and the system state is unobservable, except for the failure state. Vector information that is stochastically related to the system state is obtained through condition monitoring at equidistant sampling times. Two types of data histories are available: data histories that end with observable failure, and censored data histories that end when the system has been suspended from operation but has not failed. The state and observation processes are modeled in the hidden Markov framework and the model parameters are estimated using the expectation-maximization algorithm. We show that both the pseudolikelihood function and the parameter updates in each iteration of the expectation-maximization algorithm have explicit formulas. A numerical example is developed using real multivariate spectrometric oil data coming from the failing transmission units of 240-ton heavy hauler trucks used in the Athabasca oil sands of Alberta, Canada.
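To make the estimation step concrete, the following is a minimal sketch, assuming a discrete-time hidden Markov model with two unobservable operational states and scalar Gaussian observations. The function names forward_backward and em_step, the Gaussian emission choice, and the two-state discretization are illustrative assumptions; they do not reproduce the paper's continuous-time formulation or its handling of failure-terminated and censored histories.

```python
import numpy as np

# Minimal Baum-Welch (EM) iteration for a 2-hidden-state HMM with 1-D Gaussian
# emissions, illustrating how smoothed state probabilities feed closed-form updates.

def forward_backward(obs, A, pi, means, stds):
    T, K = len(obs), len(pi)
    # emission likelihoods b[t, k] = N(obs[t]; means[k], stds[k]^2)
    b = np.exp(-0.5 * ((obs[:, None] - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    alpha = np.zeros((T, K)); beta = np.zeros((T, K)); c = np.zeros(T)
    alpha[0] = pi * b[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                       # scaled forward pass
        alpha[t] = (alpha[t - 1] @ A) * b[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # scaled backward pass
        beta[t] = (A @ (b[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                        # P(state_t | all observations)
    xi = alpha[:-1, :, None] * A[None] * (b[1:] * beta[1:])[:, None, :] / c[1:, None, None]
    return gamma, xi, np.log(c).sum()           # xi: smoothed transition probabilities

def em_step(obs, A, pi, means, stds):
    gamma, xi, loglik = forward_backward(obs, A, pi, means, stds)   # E-step
    pi_new = gamma[0]                                               # closed-form M-step
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    means_new = (gamma * obs[:, None]).sum(axis=0) / gamma.sum(axis=0)
    stds_new = np.sqrt((gamma * (obs[:, None] - means_new) ** 2).sum(axis=0) / gamma.sum(axis=0))
    return A_new, pi_new, means_new, stds_new, loglik
```

Each iteration calls forward_backward to obtain smoothed state and transition probabilities (the E-step) and then applies the closed-form parameter updates (the M-step), mirroring the explicit-formula structure described above.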
Model assumptions

We assume that a technical system's condition can be categorized into one of three states: a healthy or 'good as new' state (state 0), an unhealthy or deteriorated state (state 1), and a failure state (state 2). In many real-world applications the state of an operational system is unobservable, and only the failure state is observable. For example, the state of an operational transmission unit in a heavy hauler truck cannot be observed without a full system inspection, which is typically quite costly.

As detailed in Section 2, to satisfy the assumptions of independence and normality, we first need to fit a model that accounts for autocorrelation in the data histories and take the residuals of the fitted model as the observation process in the hidden Markov model. Before fitting a model to the data histories, we have to approximate the healthy portions of the
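As a concrete illustration of this preprocessing step, the sketch below fits a first-order vector autoregression by least squares to a single data history and returns its residuals. The function name var1_residuals, the first-order lag, and the per-history fit are assumptions made for illustration, not the model selection procedure used in the paper.

```python
import numpy as np

# Hypothetical sketch: fit a VAR(1) model to a multivariate condition-monitoring
# history and use its residuals as the (approximately independent) observation
# process for the hidden Markov model.

def var1_residuals(history):
    """history: (T, d) array of condition-monitoring vectors at equidistant
    sampling times. Returns the (T-1, d) one-step-ahead residuals."""
    y_prev, y_curr = history[:-1], history[1:]
    X = np.hstack([np.ones((len(y_prev), 1)), y_prev])   # intercept + lagged values
    coef, *_ = np.linalg.lstsq(X, y_curr, rcond=None)    # least-squares VAR(1) fit
    return y_curr - X @ coef                             # residuals ~ whitened observations
```

In practice the autoregressive model would be fit only to the (approximately) healthy portion of each history, and the residuals from all histories would then serve as the observation vectors entering the EM algorithm.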