“…Our feature ranking is <PW,PL,SL,SW>. That of R. K. De et al [7] is <PL,SW,SL,PW>, and that of Jia et al [6] is <PL,PW,SL,SW>. It is commonly thought that PL and PW are the most important features for classification.…”
Section: Methods
“…Further, unlike the pruning method of Jia et al [6], setting to 0.5 all outputs of the nodes at L2 that are connected from the ith node in L1 can be regarded as equivalent to pruning feature f_i from the NF, because, in terms of fuzzy reasoning, the information a feature provides is entirely uncertain if all memberships defined on it equal 0.5.…”
Section: Feature Selection Algorithm
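The clamping idea in the excerpt above lends itself to a direct implementation. Below is a minimal sketch in Python, assuming a trained NF whose fuzzification layer (L2) outputs an array of memberships with one slice per input feature; `nf_forward` and the array layout are hypothetical names, not from the paper. Clamping every membership of feature f_i to 0.5 renders the feature fully uncertain, and the resulting deviation of the network outputs serves as a sensitivity score.

```python
import numpy as np

def feature_sensitivity(nf_forward, memberships, i):
    """Score feature i by clamping all of its L2 memberships to 0.5
    (fully uncertain in fuzzy terms) and measuring how much the
    network outputs change. A large change means the feature matters.

    nf_forward  : hypothetical callable taking an array of shape
                  (n_samples, n_features, n_memberships) and
                  returning the network outputs
    memberships : L2 outputs of the trained NF for a dataset
    i           : index of the feature to "prune"
    """
    baseline = nf_forward(memberships)
    clamped = memberships.copy()
    clamped[:, i, :] = 0.5        # neutral membership == pruned feature
    return np.mean(np.abs(baseline - nf_forward(clamped)))
```

Ranking the features by this score in descending order would yield an ordering of the kind reported in the Methods excerpt; low-scoring features are candidates for pruning.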
“…But this method, like other standard normalization methods, is based on low-order statistics of the data and is independent of the learning process; so although such methods preserve some invariance properties, such as translation and scale invariance, they may destroy others, such as rotation invariance, and may even lose information that is valuable for the classification target [8]. Jia et al [6] added a membership mapping layer between the input layer and the hidden layer of a radial basis function neural network (RBFNN); it maps an input feature to a vector whose length equals the number of classes, each element representing the membership of that feature value in a class. In other words, the layer maps inputs from the feature space to the membership space.…”
Section: Introduction
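To make the membership mapping concrete, here is a minimal sketch assuming Gaussian membership functions; the excerpt does not specify the functional form Jia et al use, and the class name and parameter initialization are illustrative only.

```python
import numpy as np

class MembershipMappingLayer:
    """Sketch of a membership mapping layer in the spirit of Jia et
    al [6]: each scalar input feature is mapped to a vector with one
    entry per class, the membership of that feature value in each
    class. Gaussian membership functions are an assumption here."""

    def __init__(self, n_features, n_classes):
        # one (center, width) pair per (feature, class) combination;
        # in practice these would be estimated from per-class feature
        # statistics rather than fixed like this
        self.centers = np.zeros((n_features, n_classes))
        self.widths = np.ones((n_features, n_classes))

    def forward(self, x):
        # x: (n_samples, n_features) in the original feature space;
        # returns (n_samples, n_features, n_classes) memberships in
        # the membership space
        d = x[:, :, None] - self.centers[None, :, :]
        return np.exp(-0.5 * (d / self.widths[None, :, :]) ** 2)
```

Because the memberships are bounded in [0, 1] by construction, such a layer sidesteps the statistics-based normalization whose drawbacks the excerpt describes.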
“…In this paper, we propose integrating an amended membership-based node-pruning method [6] with an NF whose second layer performs fuzzification, in order to implement feature selection. Feature selection takes place after the network's learning step, which includes adapting the membership functions' parameters, so the approach avoids the aforementioned problems of Jia et al and of D. Chakraborty et al.…”
Section: Introduction
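Putting the pieces together, a high-level sketch of the proposed order of operations might look as follows. The `nf` object with `fuzzify`/`infer` methods is hypothetical, and the sketch reuses the `feature_sensitivity` function from above; the point is only that selection runs after training, once the membership-function parameters have been adapted.

```python
def select_features(nf, X, keep):
    """Hypothetical pipeline: the NF is already trained (including
    adaptation of the membership-function parameters); feature
    selection happens only afterwards, on the trained model."""
    n_features = X.shape[1]
    memberships = nf.fuzzify(X)      # L2 outputs of the trained NF
    scores = [feature_sensitivity(nf.infer, memberships, i)
              for i in range(n_features)]
    ranking = sorted(range(n_features), key=lambda i: -scores[i])
    return ranking[:keep]            # indices of the retained features
```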
“…One approach defines membership functions parameterized by the distance between a sample and the class center [13][14][15]; the other defines membership functions based on information from each individual feature [1][6][9]. D. Chakraborty et al [11] put forward an NF based on fuzzy rules that is used for pattern recognition and feature selection.…”
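The two families of membership functions can be contrasted in a few lines. Gaussian forms are again an assumption, chosen only to make the distinction concrete; the cited works may use other shapes.

```python
import numpy as np

def membership_from_class_center(x, center, width=1.0):
    """First family [13][14][15]: membership of a whole sample x in a
    class, parameterized by the sample's distance to the class center."""
    return np.exp(-np.linalg.norm(x - center) ** 2 / (2.0 * width ** 2))

def membership_per_feature(x_i, center_i, width_i=1.0):
    """Second family [1][6][9]: membership defined on a single feature
    value, using information about that feature alone."""
    return np.exp(-(x_i - center_i) ** 2 / (2.0 * width_i ** 2))
```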
Feature selection algorithms based on artificial neural networks can be regarded as a special case of architecture pruning: compute the sensitivity of the network outputs to pruned features. However, these methods usually require data normalization as preprocessing, which may alter characteristics of the original data that matter for classification. A neuro-fuzzy (NF) network is a fuzzy inference system (FIS) with self-learning ability. We combine it with an architecture pruning algorithm based on the membership space and propose a new feature selection algorithm. Finally, experiments on both natural and synthetic data are carried out and compared with other methods. The results confirm the validity of the algorithm.