As a commonly used technique in data preprocessing, feature selection selects a subset of informative attributes or variables with which to build models describing the data. By removing redundant, irrelevant, or noisy features, feature selection can improve the predictive accuracy and the comprehensibility of the resulting predictors or classifiers. Many feature selection algorithms with different selection criteria have been introduced by researchers. However, it has been found that no single criterion is best for all applications. In this paper, we propose a framework based on a genetic algorithm (GA) for feature subset selection that combines various existing feature selection methods. The advantages of this approach include the ability to accommodate multiple feature selection criteria and to find small subsets of features that perform well for a particular inductive learning algorithm of interest used to build the classifier. We conducted experiments using three data sets and three existing feature selection methods. The experimental results demonstrate that our approach is robust and effective, finding subsets of features with higher classification accuracy and/or smaller size than those produced by each individual feature selection algorithm.
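To make the idea of a GA wrapper over feature subsets concrete, the following is a minimal Python sketch, not the paper's implementation: individuals are bit-masks over features, the fitness function (the SIZE_PENALTY weight, the KNeighborsClassifier learner, and the 3-fold cross-validation are all assumptions) trades off accuracy against subset size, and part of the initial population is seeded from two existing filter criteria (mutual information and ANOVA F-scores, also assumed here).

```python
# Hedged sketch of GA-based feature subset selection; dataset, learner,
# fitness weights, and GA parameters are illustrative assumptions only.
import random
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]
clf = KNeighborsClassifier()   # the inductive learner of interest (assumed)
SIZE_PENALTY = 0.01            # weight on subset size (assumed)

def fitness(mask):
    """Cross-validated accuracy of the learner on the selected features,
    penalised by the fraction of features kept."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return acc - SIZE_PENALTY * mask.sum() / n_features

def seed_from_ranking(scores, k):
    """Turn a filter method's feature scores into a top-k bit-mask."""
    mask = np.zeros(n_features, dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask

# Seed part of the population from two existing selection criteria,
# and fill the rest with random subsets.
mi = mutual_info_classif(X, y, random_state=0)
f_scores, _ = f_classif(X, y)
population = [seed_from_ranking(mi, 10), seed_from_ranking(f_scores, 10)]
population += [np.random.rand(n_features) < 0.5 for _ in range(18)]

def crossover(a, b):
    cut = random.randrange(1, n_features)
    return np.concatenate([a[:cut], b[cut:]])

def mutate(mask, rate=0.05):
    flip = np.random.rand(n_features) < rate
    return np.logical_xor(mask, flip)

for generation in range(20):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]   # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print("selected features:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))
```

Because the fitness function is defined with respect to the learner that will ultimately build the classifier, the same skeleton can be pointed at any inductive algorithm simply by swapping the estimator; the seeded individuals are what let multiple existing selection criteria contribute to the search.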
The Channel Assignment Problem is the NP-complete problem of assigning a minimum number of channels to requested calls in a cellular radio system, subject to certain constraints. The many approaches proposed to solve this problem include neural networks, simulated annealing, graph colouring, genetic algorithms, and heuristic search. We present a new heuristic algorithm that consists of three stages: 1) regular-interval assignment for the cells that determine the lower bound; 2) greedy region assignment; and 3) genetic-algorithm assignment. Through simulation, we show that our heuristic algorithm achieves lower-bound solutions for 11 of the 13 instances of the well-known Philadelphia benchmark problem. Our algorithm also has the advantage of finding optimum solutions faster than existing approaches that use neural networks.
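As a hedged illustration of the constraint structure underlying such benchmarks (not the paper's three-stage algorithm), the sketch below greedily assigns the lowest feasible channel to each requested call, given an assumed compatibility matrix C, where C[i][j] is the minimum channel separation required between cells i and j, and an assumed demand vector D.

```python
# Toy channel assignment instance; the matrix, demands, and greedy ordering
# are illustrative assumptions, not the benchmark or the proposed heuristic.
import numpy as np

C = np.array([[5, 4, 0],    # diagonal entries = co-site separation
              [4, 5, 1],
              [0, 1, 5]])
D = [2, 3, 2]               # number of calls requested in each cell

assigned = [[] for _ in D]  # channels already given to each cell

def feasible(cell, ch):
    """Channel ch can be used in `cell` iff it keeps the required separation
    from every channel already assigned in every cell (including itself)."""
    return all(abs(ch - used) >= C[cell][other]
               for other, chans in enumerate(assigned)
               for used in chans)

for cell in sorted(range(len(D)), key=lambda c: -D[c]):  # densest cells first
    for _ in range(D[cell]):
        ch = 0
        while not feasible(cell, ch):
            ch += 1
        assigned[cell].append(ch)

span = max(ch for chans in assigned for ch in chans) + 1
print("assignment:", assigned, "span (number of channels):", span)
```

A greedy pass of this kind only gives an upper bound on the span; the point of lower-bound-driven and genetic-algorithm stages in heuristics like the one described above is to close the gap between such greedy solutions and the theoretical lower bound.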
A number of models allow processors to reconfigure their local connections to create and alter various bus configurations. This reconfiguration enables the development of fast algorithms for fundamental problems, many running in constant time. We investigate the power of such models by relating time- and processor-bounded complexity classes defined for them to each other and to those of more traditional models. In this work, (1) we tighten the relations for some of the models, placing them more precisely in relation to each other than was previously known (in particular, the Linear Reconfigurable Network and the Directed Reconfigurable Network relative to circuit-defined classes), and (2) we include models (the Fusing-Restricted Reconfigurable Mesh and the Pipelined Reconfigurable Mesh) not previously considered.