Several heuristic methods have been suggested for improving the generalization capability in neural network learning, most of which are concerned with single-objective (SO) learning tasks. In this work, we discuss generalization improvement in multi-objective (MO) learning. As a case study, we investigate the generation of neural network classifiers based on receiver operating characteristic (ROC) analysis using an evolutionary multi-objective optimization algorithm. We show on a few benchmark problems that for MO learning tasks such as ROC-based classification, the generalization ability can be improved more efficiently within a multi-objective framework than within a single-objective one.
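A minimal sketch of the kind of formulation the abstract describes, under illustrative assumptions: each candidate classifier is scored by two ROC-style objectives, false positive rate and false negative rate, and a simple evolutionary loop retains the non-dominated (Pareto-optimal) candidates. The linear model, synthetic data, and mutation scheme below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: class 0 around (-1, -1), class 1 around (+1, +1).
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.hstack([np.zeros(100), np.ones(100)])

def objectives(w):
    """Return (false positive rate, false negative rate) of a linear classifier."""
    pred = (X @ w[:2] + w[2]) > 0
    fpr = np.mean(pred[y == 0])    # negatives wrongly flagged as positive
    fnr = np.mean(~pred[y == 1])   # positives missed
    return np.array([fpr, fnr])

def dominates(a, b):
    """Pareto dominance: a is no worse in both objectives and strictly better in one."""
    return np.all(a <= b) and np.any(a < b)

# Simple (mu + lambda) evolutionary loop with Gaussian mutation.
pop = [rng.normal(size=3) for _ in range(20)]
for _ in range(100):
    offspring = [w + rng.normal(scale=0.2, size=3) for w in pop]
    combined = pop + offspring
    scores = [objectives(w) for w in combined]
    # Keep the non-dominated candidates, refilling if the front is small.
    front = [w for w, s in zip(combined, scores)
             if not any(dominates(t, s) for t in scores)]
    pop = (front + offspring)[:20]

# The surviving population approximates a ROC front: each member trades
# off false positives against false negatives differently.
for w in pop[:5]:
    fpr, fnr = objectives(w)
    print(f"FPR={fpr:.2f}  TPR={1 - fnr:.2f}")
```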
Handling catastrophic forgetting is an interesting and challenging topic in modeling the memory mechanisms of the human brain with machine learning models. From a more general point of view, catastrophic forgetting reflects the stability-plasticity dilemma, one of several dilemmas that learning systems must address: retaining stored memories while learning new information. In contrast to existing approaches, we introduce a Pareto-optimality based multi-objective learning framework for alleviating catastrophic forgetting. Compared to single-objective learning methods, multi-objective evolutionary learning with the help of pseudorehearsal is shown to be more promising in dealing with the stability-plasticity dilemma.
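The following is a hedged sketch of how pseudorehearsal can sit inside a multi-objective loop, not the paper's actual method: random "pseudo-items" are labelled by the old, frozen network, and candidate weight vectors are then scored by two objectives, error on the new task and error on the pseudo-items, with the non-dominated set retained. The tiny network, data, and mutation scale are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(w, x):
    """Tiny 2-4-1 network; w is a flat parameter vector of length 17."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

n_params = 17
old_w = rng.normal(size=n_params)  # stands in for a network trained on the old task

# Pseudorehearsal: random inputs labelled by the old network's own outputs.
pseudo_x = rng.uniform(-1, 1, (50, 2))
pseudo_y = forward(old_w, pseudo_x)

# New task data (illustrative): XOR-like targets.
new_x = rng.uniform(-1, 1, (50, 2))
new_y = np.sign(new_x[:, 0] * new_x[:, 1])

def objectives(w):
    """(error on new task, error on pseudo-items) -- both minimized."""
    e_new = np.mean((forward(w, new_x) - new_y) ** 2)
    e_old = np.mean((forward(w, pseudo_x) - pseudo_y) ** 2)
    return np.array([e_new, e_old])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

# Evolutionary search over weight vectors, keeping the Pareto front that
# trades off plasticity (new-task error) against stability (pseudo-item error).
pop = [old_w + rng.normal(scale=0.1, size=n_params) for _ in range(20)]
for _ in range(200):
    offspring = [w + rng.normal(scale=0.1, size=n_params) for w in pop]
    combined = pop + offspring
    scores = [objectives(w) for w in combined]
    front = [w for w, s in zip(combined, scores)
             if not any(dominates(t, s) for t in scores)]
    pop = (front + offspring)[:20]

best = min(pop, key=lambda w: objectives(w).sum())
print("new-task error, pseudo-item error:", objectives(best))
```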