Recent works have shown that combining several classifiers is an effective way to improve classification accuracy. Many ensemble approaches, such as bagging and boosting, have been introduced and have reduced the generalization error of various classifiers; however, these methods have not been able to improve the performance of the Nearest Neighbor (NN) classifier. In this paper, a novel weighted ensemble technique (WNNE) is presented for improving the performance of the NN classifier. WNNE is a combination of several NN classifiers, each operating on a different subset of the input feature set. The algorithm assigns a weight to each classifier and uses a weighted voting mechanism among them to determine the output of the ensemble. We evaluated the proposed method on several datasets from the UCI Repository and compared it with the NN classifier and the Random Subspace Method (RSM). The results show that our method outperforms both approaches.
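To make the weighted-vote idea concrete, the sketch below builds several 1-NN members on random feature subsets and combines their votes, each weighted by the member's leave-one-out training accuracy. The weighting scheme and all function names are illustrative assumptions; the paper's exact WNNE weighting may differ.

```python
import math
import random

def nn_predict(train_X, train_y, query):
    """Classify `query` with a 1-NN rule under Euclidean distance."""
    i = min(range(len(train_X)), key=lambda j: math.dist(train_X[j], query))
    return train_y[i]

def build_wnne(X, y, n_members=3, subset_size=2, seed=0):
    """Build NN members on random feature subsets and weight each one.
    Here the weight is the member's leave-one-out training accuracy --
    an assumed scheme, used only for illustration."""
    rng = random.Random(seed)
    members = []
    n_feats = len(X[0])
    for _ in range(n_members):
        feats = rng.sample(range(n_feats), subset_size)
        Xs = [[row[f] for f in feats] for row in X]
        # leave-one-out accuracy on the training set
        correct = sum(
            nn_predict(Xs[:i] + Xs[i + 1:], y[:i] + y[i + 1:], Xs[i]) == y[i]
            for i in range(len(y))
        )
        members.append((feats, correct / len(y)))
    return members

def wnne_predict(members, X, y, query):
    """Weighted vote: each member votes for a label with its weight."""
    votes = {}
    for feats, w in members:
        Xs = [[row[f] for f in feats] for row in X]
        label = nn_predict(Xs, y, [query[f] for f in feats])
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

Each member sees only its own feature subset, so the ensemble's diversity comes from the subsets while the weights discount members whose subsets are less discriminative.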
In recent years, tremendous volumes of streaming data have been generated in many application areas. These data pose new challenges that demand fast, online processing, especially for classification problems. One of the most challenging problems in the field of data streams, and one that degrades the performance of traditional methods, is concept change. To handle this problem, the classifier system must be updated after every alteration in the concept of the data; however, updating a classifier is often a time-consuming and expensive process. In this paper, an efficient method is proposed for quickly and easily updating a fuzzy rule-based classifier by assigning a weight to each rule. Two procedures for online adjustment of the rule weights are then proposed. The experimental results show that these methods outperform a non-weighted approach.
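The following sketch illustrates the general idea of adapting a fuzzy rule-based classifier by adjusting only rule weights: each rule's influence is its weight times its firing strength, and an online update rewards or punishes the winning rule after each labeled example. The Gaussian memberships and this particular reward/punish update are assumptions for illustration, not necessarily either of the paper's two procedures.

```python
import math

class FuzzyRule:
    def __init__(self, centers, label, width=1.0, weight=1.0):
        self.centers, self.label = centers, label
        self.width, self.weight = width, weight

    def firing(self, x):
        """Product of Gaussian memberships over the input dimensions."""
        return math.prod(
            math.exp(-((xi - c) ** 2) / (2 * self.width ** 2))
            for xi, c in zip(x, self.centers)
        )

def classify(rules, x):
    """Winner-take-all: the rule with the largest weighted firing strength."""
    return max(rules, key=lambda r: r.weight * r.firing(x))

def update_weights(rules, x, true_label, eta=0.1):
    """One plausible online adjustment (assumed, for illustration):
    reward the winning rule when it is right, punish it when wrong.
    Only weights change, so the update is cheap after a concept change."""
    winner = classify(rules, x)
    if winner.label == true_label:
        winner.weight = min(1.0, winner.weight + eta * (1 - winner.weight))
    else:
        winner.weight = max(0.0, winner.weight - eta * winner.weight)
```

Because the antecedents and consequents stay fixed, each update touches a single scalar, which is what makes this style of adaptation attractive for data streams.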
We present a new incremental fuzzy reinforcement learning algorithm that finds a sub-optimal policy for infinite-horizon Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs). The algorithm addresses the high computational complexity of solving large Dec-POMDPs by generating a compact fuzzy rule-base for each agent. In our method, each agent uses its own fuzzy rule-base to make decisions. The fuzzy rules in these rule-bases are created and tuned incrementally according to the agents' experiences, and reinforcement learning tunes the behavior of each agent so that the maximum global reward is achieved. In addition, we propose a method for constructing each agent's initial rule-base from the solution of the underlying MDP, which drastically improves performance compared with random initialization of the rule-base. We assess the proposed method on several benchmark problems in comparison with state-of-the-art methods. Experimental results show that our algorithm achieves rewards that are better than or comparable to those of the other methods, while in terms of runtime it is superior to all of them. Using a compact fuzzy rule-base not only reduces memory usage but also significantly speeds up the learning phase.
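As a minimal single-agent sketch of fuzzy reinforcement learning of the kind described above, the code below keeps a q-value per action in each fuzzy rule, blends rules by normalized firing strength, and distributes the temporal-difference error over the rules. The rule shapes, the 1-D observation, and the update are illustrative assumptions and do not reproduce the paper's multi-agent construction.

```python
import math

class FuzzyQAgent:
    """Fuzzy Q-learning sketch: each rule covers a region of the
    observation space and stores one q-value per action."""

    def __init__(self, centers, n_actions, width=1.0, alpha=0.1, gamma=0.9):
        self.centers = centers
        self.q = [[0.0] * n_actions for _ in centers]
        self.width, self.alpha, self.gamma = width, alpha, gamma

    def firing(self, obs):
        """Normalized Gaussian firing strength of each rule."""
        raw = [math.exp(-((obs - c) ** 2) / (2 * self.width ** 2))
               for c in self.centers]
        s = sum(raw)
        return [f / s for f in raw]

    def q_values(self, obs):
        """Q-value of each action: firing-weighted sum over the rules."""
        phi = self.firing(obs)
        n_actions = len(self.q[0])
        return [sum(p * qr[a] for p, qr in zip(phi, self.q))
                for a in range(n_actions)]

    def update(self, obs, action, reward, next_obs):
        """Distribute the TD error over rules by firing strength."""
        phi = self.firing(obs)
        target = reward + self.gamma * max(self.q_values(next_obs))
        td = target - self.q_values(obs)[action]
        for i, p in enumerate(phi):
            self.q[i][action] += self.alpha * p * td
```

The rule-base plays the role of a compact function approximator: memory grows with the number of rules rather than with the (possibly huge) observation space, which is the source of the memory and speed benefits claimed above.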