Oil and gas industries need secure and cost-effective alarm systems to meet safety requirements and to avoid problems that lead to plant shutdowns, production losses, accidents and associated lawsuit costs. Although most current distributed control systems (DCS) collect and archive alarm event logs, the extensive quantity and complexity of such data make identifying the underlying problem a labour-intensive and time-consuming task. This paper proposes a data mining approach designed to support alarm rationalization by discovering correlated sets of alarm tags. The proposed approach was initially evaluated using simulation data from a Vinyl Acetate model. Experimental results show that the approach, which uses an event segmentation and data filtering strategy based on a cross-effect test, is practical and has the potential to extract meaningful alarm patterns efficiently from a sequence of alarm events.
SUMMARY: Although a number of weaknesses of symbolic execution, when used for software testing, have been highlighted in the literature, the recent resurgence of strongly typed languages has created opportunities for re-examining symbolic execution to determine whether these shortfalls can be overcome. This paper discusses symbolic execution in general and makes two contributions: (a) overcoming one of the key problems, analysing programs with indexed arrays; and (b) describing the incorporation of a symbolic execution module for test case generation into an integrated testing tool. For methods that index arrays, a new approach determines all the possible values of each array index, allowing the generation of equivalence classes for every possible combination of array element aliases. An incremental simplification approach, which converts path expressions to canonical forms in order to identify infeasible paths at the earliest opportunity and thus reduce analysis time, is also described. Symbolic execution is most effective when included in an integrated test and analysis environment: a component test bench was built with an integrated symbolic execution module, providing a toolbox of software component test and code analysis methods aimed at programmers at all levels.
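The abstract above describes enumerating the possible values of each array index to form equivalence classes of array-element aliases. The paper's actual algorithm is not reproduced here; as a minimal sketch of the idea (with invented function and variable names), one can enumerate every combination of concrete index values and group the combinations by which indices coincide:

```python
from itertools import product

def alias_classes(index_domains):
    """Group assignments of symbolic array indices into equivalence
    classes keyed by which indices alias (refer to the same element).

    index_domains: dict mapping index name -> iterable of possible values.
    Returns: dict mapping an alias pattern (frozenset of frozensets of
    index names that coincide) -> list of concrete assignments.
    """
    names = sorted(index_domains)
    classes = {}
    for values in product(*(index_domains[n] for n in names)):
        assignment = dict(zip(names, values))
        # Indices alias exactly when they evaluate to the same value.
        by_value = {}
        for name, value in assignment.items():
            by_value.setdefault(value, []).append(name)
        pattern = frozenset(frozenset(g) for g in by_value.values())
        classes.setdefault(pattern, []).append(assignment)
    return classes

# Two symbolic indices i, j over a 2-element array yield two alias
# classes: one where i == j and one where i != j.
classes = alias_classes({"i": [0, 1], "j": [0, 1]})
```

Each equivalence class would then correspond to one path condition the symbolic executor must analyse separately.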
Complex industrial processes such as nuclear power plants, chemical plants and petroleum refineries are usually equipped with alarm systems capable of monitoring thousands of process variables and generating tens of thousands of alarms, which serve as mechanisms for alerting operators to take action to alleviate or prevent an abnormal situation. Overalarming and a lack of configuration management practices have often led to the degradation of these alarm systems, resulting in operational problems such as the Three Mile Island accident. To aid alarm rationalization, this paper proposes an approach that combines context-based segmentation with a data mining technique to find sets of correlated alarms in historical alarm event logs. Before the results extracted by this automated technique are used, they can be evaluated by a process engineer with process understanding. The proposed approach is evaluated initially using simulation data from a Vinyl Acetate model. The approach is cost-effective, as manual analysis of event logs to identify primary and consequential alarms can be very time- and labour-intensive.
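The paper's exact segmentation and correlation method is not reproduced here. As a rough illustration of mining correlated alarm tags from segmented event logs (with invented tag names), one can count how often tag pairs co-occur within the same segment and keep pairs above a support threshold:

```python
from collections import Counter
from itertools import combinations

def correlated_pairs(segments, min_support):
    """Count co-occurring alarm-tag pairs across event-log segments and
    return those appearing in at least min_support segments.

    segments: list of iterables of alarm tags (one per time segment).
    """
    counts = Counter()
    for segment in segments:
        # Each distinct pair of tags in a segment counts once.
        for pair in combinations(sorted(set(segment)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Toy alarm log: tags FI101 and TI205 fire together in 3 of 4 segments.
segments = [
    ["FI101", "TI205", "PI330"],
    ["FI101", "TI205"],
    ["TI205", "FI101", "LI410"],
    ["PI330"],
]
pairs = correlated_pairs(segments, min_support=3)
```

A process engineer would then review such candidate pairs to decide which alarms are primary and which are merely consequential.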
For many years, lenders have used traditional statistical techniques such as logistic regression and discriminant analysis to distinguish more precisely between creditworthy customers, who are granted loans, and non-creditworthy customers, who are denied loans. More recently, machine learning techniques such as neural networks, decision trees, and support vector machines have been successfully employed to classify loan applicants into those who are likely to pay off a loan and those likely to default. Accurate classification is beneficial to lenders, in terms of increased profits or reduced losses, and to loan applicants, who can avoid overcommitment. This paper examines a historical data set of consumer loans issued by a German bank to individuals whom the bank considered to be qualified customers. The data set consists of the financial attributes of each customer and includes a mixture of loans that the customers paid off or defaulted upon. The paper examines and compares the classification accuracy rates of three decision tree techniques and analyzes their ability to generate easy-to-understand rules.
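The three decision tree techniques compared in the paper are not detailed in the abstract. As a generic sketch of the splitting mechanism most tree learners share, the code below picks the attribute whose split yields the largest information gain on a toy loan data set (the attribute names and records are invented for illustration):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    total = len(labels)
    return -sum((n / total) * log2(n / total)
                for n in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction from splitting the rows on one attribute."""
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attribute], []).append(label)
    weighted = sum(len(g) / len(labels) * entropy(g)
                   for g in groups.values())
    return entropy(labels) - weighted

# Toy loan records: hypothetical attributes, outcome paid/default.
rows = [
    {"employed": "yes", "owns_home": "no"},
    {"employed": "yes", "owns_home": "yes"},
    {"employed": "no",  "owns_home": "yes"},
    {"employed": "no",  "owns_home": "no"},
]
labels = ["paid", "paid", "default", "default"]

best = max(["employed", "owns_home"],
           key=lambda a: information_gain(rows, labels, a))
```

Here the split on "employed" separates the outcomes perfectly (gain of 1 bit), while "owns_home" provides no gain, so a tree learner would split on "employed" first; repeating this recursively yields the kind of readable if-then rules the paper evaluates.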