Early detection of breast cancer is key to improving the survival rate. Thermography is a promising front-line screening tool, as it can warn women of breast cancer up to 10 years in advance. However, the analysis and interpretation of thermograms depend heavily on the analyst and can therefore be inconsistent and error-prone. To boost the accuracy of preliminary thermogram-based screening without incurring additional financial burden, a Complementary Learning Fuzzy Neural Network (CLFNN), FALCON-AART, is proposed as a Computer-Assisted Intervention (CAI) tool for thermogram analysis. CLFNN is a neuroscience-inspired technique that provides intuitive fuzzy rules, human-like reasoning, and good classification performance. The confluence of thermography and CLFNN offers a promising tool in the fight against breast cancer.
There are two important issues in neuro-fuzzy modeling: (1) interpretability, the ability to describe the behavior of the system in an understandable way, and (2) accuracy, the ability to approximate the outcome of the system closely. Because these two objectives usually place contradictory requirements on the neuro-fuzzy model, a compromise has to be made. This letter proposes a novel rule reduction algorithm, namely Hebb rule reduction, together with an iterative tuning process, to balance interpretability and accuracy. The Hebb rule reduction algorithm uses Hebbian ordering, which represents the degree to which a rule covers the samples, as an importance measure for each rule in order to merge membership functions and hence reduce the number of rules. Similar membership functions (MFs) are merged according to a specified similarity measure, in order of Hebbian importance, and the resulting equivalent rules are deleted from the rule base; among a set of equivalent rules, the one with the highest Hebbian importance is retained. The MFs are then tuned with the least mean square (LMS) algorithm to reduce the modeling error. The tuning of the MFs and the reduction of the rules proceed iteratively to achieve a balance between interpretability and accuracy. Three published data sets by Nakanishi (Nakanishi, Turksen, & Sugeno, 1993), the Pat synthetic data set (Pal, Mitra, & Mitra, 2003), and a traffic flow density prediction data set are used as benchmarks to demonstrate the effectiveness of the proposed method. Good interpretability and high modeling accuracy are obtained simultaneously and are benchmarked against other well-established neuro-fuzzy models.
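The iteration described in this abstract can be made concrete with a short sketch. The Python code below is not the authors' implementation; it is a minimal illustration that assumes Gaussian MFs, product firing strengths, coverage-based Hebbian importance, and an exponential similarity measure, and for brevity it LMS-tunes only the rule consequents rather than the MF parameters. All function names and thresholds are hypothetical.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership of x for an MF with centre c and width s."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def firing(X, centres, widths):
    """Rule firing strengths: product of per-dimension memberships."""
    m = gauss(X[:, None, :], centres[None, :, :], widths[None, :, :])
    return m.prod(axis=2)  # shape (n_samples, n_rules)

def mf_similarity(c1, s1, c2, s2):
    """Similarity of two Gaussian MFs: 1 when identical, toward 0 as they separate."""
    return np.exp(-abs(c1 - c2) / (s1 + s2))

def hebb_rule_reduction(X, y, centres, widths, weights,
                        sim_thresh=0.8, lr=0.05, n_iter=20):
    """Alternate an LMS tuning step with Hebbian-ordered merging of similar MFs."""
    for _ in range(n_iter):
        # LMS step: tune the rule consequents to reduce the modeling error
        f = firing(X, centres, widths)
        f_norm = f / (f.sum(axis=1, keepdims=True) + 1e-12)
        err = y - f_norm @ weights
        weights = weights + lr * f_norm.T @ err / len(X)

        # Hebbian importance: total coverage of the samples by each rule
        order = np.argsort(-f.sum(axis=0))  # most important rules first

        # Merge similar MFs, copying parameters from the more important rule
        for d in range(X.shape[1]):
            for pos, i in enumerate(order):
                for j in order[pos + 1:]:
                    if mf_similarity(centres[i, d], widths[i, d],
                                     centres[j, d], widths[j, d]) > sim_thresh:
                        centres[j, d], widths[j, d] = centres[i, d], widths[i, d]

        # Delete rules whose MFs have become identical to a more important rule
        _, keep = np.unique(np.round(centres, 6), axis=0, return_index=True)
        centres, widths, weights = centres[keep], widths[keep], weights[keep]
    return centres, widths, weights
```

In a typical run of such a sketch, one rule per training sample (or per cluster) would be used as the initial rule base, and the iteration would prune it while the LMS step keeps the output error low.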
Genetic complementary learning (GCL) is a brain-inspired learning system based on human pattern recognition and the gene selection process. It is a confluence of hippocampal complementary learning and the evolutionary genetic algorithm. With the genetic algorithm providing the possibility of an optimal solution and complementary learning providing efficient pattern recognition, GCL may offer superior performance. In contrast to other computational finance tools such as neural networks and statistical methods, GCL provides greater interpretability and does not rely on assumptions about the underlying data distribution. It is an evolving, autonomous system that avoids the time-consuming process of manual rule construction or modeling. This is especially favorable in the financial world, where data is ever-changing and requires frequent updating. The feasibility of GCL as a stock market predictor and as a bank failure early warning system is investigated. The experimental results show that GCL is a competent computational finance tool for stock market prediction and bank failure early warning.
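To make the confluence concrete, the following Python sketch shows one way complementary (positive/negative) rule banks and a genetic algorithm could be combined. This is an illustrative toy, not the published GCL system: Gaussian rule banks, a max-firing complementary decision, classification accuracy as fitness, and a truncation-plus-mutation GA without crossover are all simplifying assumptions, and every function name here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation(x, rules):
    """Strongest firing of x over a bank of Gaussian rules (centres, widths)."""
    centres, widths = rules
    m = np.exp(-((x - centres) ** 2) / (2 * widths ** 2)).prod(axis=1)
    return m.max()

def classify(x, pos_rules, neg_rules):
    """Complementary decision: positive class wins if positive rules fire more strongly."""
    return 1 if activation(x, pos_rules) >= activation(x, neg_rules) else 0

def fitness(individual, X, y):
    """Classification accuracy of a (positive rules, negative rules) pair."""
    pos_rules, neg_rules = individual
    preds = np.array([classify(x, pos_rules, neg_rules) for x in X])
    return float((preds == y).mean())

def random_rules(n_rules, n_dims):
    return rng.normal(size=(n_rules, n_dims)), rng.uniform(0.5, 1.5, (n_rules, n_dims))

def mutate(individual, scale=0.1):
    """Perturb rule centres and widths with small Gaussian noise."""
    return tuple((c + rng.normal(0, scale, c.shape),
                  np.abs(w + rng.normal(0, scale, w.shape)) + 1e-3)
                 for c, w in individual)

def evolve(X, y, pop_size=20, n_rules=3, n_gen=30):
    """Toy GA: keep the fitter half of the population, refill with mutated copies."""
    n_dims = X.shape[1]
    pop = [(random_rules(n_rules, n_dims), random_rules(n_rules, n_dims))
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda ind: -fitness(ind, X, y))
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(p) for p in parents]
    return max(pop, key=lambda ind: fitness(ind, X, y))
```

A call such as `evolve(X_train, y_train)` would return the fittest pair of positive and negative rule banks, whose centres and widths could then be read off as human-interpretable fuzzy rules.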