Purpose
Maintenance management is a vital strategic task given the increasing demand for sustained availability of machines. Machine performance depends primarily on failure frequency and downtime; therefore, ranking critical machines on these two criteria is important for determining the appropriate maintenance strategy. The purpose of this paper is to compare two methods, using case studies, for allocating maintenance strategies while prioritising performance based on failure frequency and downtime or Mean Time to Repair: the Decision Making Grid (DMG) and the Jack-Knife Diagram (JKD).

Design/methodology/approach
The literature indicates the need for an approach that integrates maintenance performance and strategy, so that existing data on equipment failures can be exploited and preventive measures routinely adjusted. Maintenance strategies are not interchangeable: one strategy should not be applied to all machines, nor should all strategies be applied to the same machine.

Findings
Compared to the Pareto histogram, the DMG and JKD provide visual representations of the performance of the worst-performing machines with respect to failure frequency and downtime, allowing maintenance technicians to apply the appropriate maintenance strategy. Each method has its own merits.

Research limitations/implications
This work compares only the two methods in their original conceptualisations, because both use the same input data and share similar main features. There is scope to compare other methods, or variations of these methods.

Practical implications
This paper highlights how the DMG and JKD can be incorporated into industrial applications to allocate appropriate maintenance strategies and track machine performance over time.

Originality/value
The DMG and JKD have not previously been compared in the literature. To date, the JKD has been used to rank machines, and the DMG has been used to determine maintenance strategies.
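The grid itself is not reproduced here, but the core idea of allocating a strategy from frequency and downtime bands can be sketched in Python as follows. This is a minimal illustration only: the threshold values, band labels, and strategy mapping below are assumptions for demonstration and are not taken from the paper's case studies.

```python
# Hypothetical sketch of a DMG-style allocation: classify a machine by its
# failure frequency and downtime bands, then look up a maintenance strategy.
# Thresholds and the strategy mapping are illustrative assumptions.

def band(value, low, high):
    """Place a value into a 'low', 'medium', or 'high' band."""
    if value < low:
        return "low"
    if value < high:
        return "medium"
    return "high"

# Simplified strategy lookup keyed by (frequency band, downtime band).
STRATEGY = {
    ("high", "high"): "design out maintenance",
    ("high", "low"): "skill level upgrade",
    ("low", "high"): "condition-based maintenance",
    ("low", "low"): "operate to failure",
}

def allocate_strategy(freq, downtime,
                      freq_limits=(5, 10), downtime_limits=(50, 100)):
    f_band = band(freq, *freq_limits)
    d_band = band(downtime, *downtime_limits)
    # Fall back to fixed-time (preventive) maintenance for the remaining cells.
    return STRATEGY.get((f_band, d_band), "fixed time maintenance")

# Usage: a machine with 12 failures and 120 hours of downtime in the period.
print(allocate_strategy(12, 120))   # -> "design out maintenance"
```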
This paper presents a methodology to provide the cumulative failure distribution (CDF) for degrading, uncertain, and dynamic systems. The novelty of the methodology is that the long service time over which degradation occurs is augmented with the much shorter cycle time over which there is uncertainty in the system dynamics due to uncertain design variables. The significance of the proposed methodology is that it lays the foundation for setting realistic life-cycle management policies for dynamic systems. The methodology first replaces the implicit mechanistic model with a simple explicit meta-model with the help of design of experiments and singular value decomposition, then transforms the dynamic, time-variant, probabilistic problem into a sequence of time-invariant steady-state probability problems using cycle-time performance measures and discrete service times, and finally builds the CDF as the summation of the incremental service-time failure probabilities over the planned lifetime. For multiple failure modes and multiple discrete service times, set theory establishes a sequence of true incremental failure regions. A practical implementation of the theory requires only two contiguous service times. Probabilities may be evaluated by any convenient method, such as Monte Carlo simulation and the first-order reliability method. Error analysis provides ways to control errors in the probability calculations and the meta-model fit. A case study of a common servo-control mechanism shows that the new methodology is sufficiently fast for design purposes and sufficiently accurate for engineering applications.
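As a rough illustration of building the cumulative failure distribution as a sum of incremental service-time failure probabilities, the following Python sketch uses a toy limit-state function and a single uncertain design variable. The degradation law, distributions, and sample size are placeholder assumptions, not the paper's servo-control case study or its meta-model.

```python
# Minimal sketch, assuming a scalar limit state g(x, t): failure at service
# time t when g(x, t) < 0. The model below is a toy stand-in.
import numpy as np

rng = np.random.default_rng(0)

def g(x, t):
    # Toy limit state: capacity degrades linearly with service time t,
    # demand depends on the uncertain design variable x.
    capacity = 10.0 - 0.05 * t
    demand = 3.0 * x
    return capacity - demand

def cdf_of_failure(service_times, n_samples=100_000):
    """Build the cumulative failure distribution as the running sum of
    incremental failure probabilities over discrete service times."""
    x = rng.normal(loc=2.0, scale=0.3, size=n_samples)  # uncertain design variable
    failed = np.zeros(n_samples, dtype=bool)
    cdf = []
    for t in service_times:
        newly_failed = (~failed) & (g(x, t) < 0.0)   # incremental failure region
        failed |= newly_failed
        cdf.append(failed.mean())                    # cumulative failure probability
    return np.array(cdf)

# Usage: cumulative failure probability at service times 0, 10, ..., 100.
print(cdf_of_failure(service_times=range(0, 101, 10)))
```

Note that each step only needs the failure state at the previous service time and the current one, mirroring the point that a practical implementation requires only two contiguous service times.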
In design, much research deals with cases where design variables are deterministic, thus ignoring possible uncertainties present in manufacturing or environmental conditions. When uncertainty is considered, the design variables are assumed to follow particular distributions with defined parameters. Probabilistic design aims to reduce the probability of failure of a system by moving the distribution parameters of the design variables. The most popular method to estimate the probability of failure is Monte Carlo simulation, in which many runs are generated from the distribution parameters and the number of times the system fails to meet specifications is counted. This method, however, can become time-consuming as the mechanistic model developed to represent a physical system becomes increasingly complex. From structural reliability theory, the First Order Reliability Method (FORM) is an efficient way to estimate the probability of failure and to move the parameters so as to reduce it. However, if the mechanistic model is too complex, FORM becomes difficult to use. This paper presents a methodology that uses approximating functions, called 'metamodels', with FORM to search for a design that minimizes the probability of failure. The method is applied to three examples, and the accuracy and speed of this metamodel-based probabilistic design method are discussed. The speed and accuracy of three popular metamodels, the response surface model, the radial basis function, and the Kriging model, are compared. Finally, some theory is presented on how the method can be applied to systems with a dynamic performance measure, where the response lifetime is required to compute another performance measure.
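A minimal sketch of the metamodel-based Monte Carlo estimate of the probability of failure is shown below. The quadratic response-surface fit stands in for the metamodels compared in the paper, and the expensive_model function, design-variable distributions, and limit state are illustrative assumptions rather than any of the paper's three examples.

```python
# Minimal sketch, assuming a single limit state g(x) < 0 defines failure.
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x1, x2):
    # Placeholder for a costly mechanistic simulation.
    return 7.0 - x1**2 - 0.5 * x2

# Fit a quadratic response-surface metamodel from a small design of experiments.
X = rng.uniform(-3, 3, size=(30, 2))
y = expensive_model(X[:, 0], X[:, 1])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def metamodel(x1, x2):
    return (coef[0] + coef[1]*x1 + coef[2]*x2 +
            coef[3]*x1**2 + coef[4]*x2**2 + coef[5]*x1*x2)

# Monte Carlo estimate of the probability of failure using the cheap metamodel
# in place of the expensive mechanistic model.
n = 200_000
x1 = rng.normal(0.0, 1.0, n)   # assumed distributions of the design variables
x2 = rng.normal(0.0, 1.0, n)
p_fail = np.mean(metamodel(x1, x2) < 0.0)
print(f"Estimated probability of failure: {p_fail:.4f}")
```

In a probabilistic design loop, the distribution parameters (here the means of x1 and x2) would be moved, or FORM applied to the fitted metamodel, to drive this probability down without re-running the expensive model at every iteration.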