2015
DOI: 10.1016/j.ins.2014.12.003

Dimensionality reduction by feature clustering for regression problems

Cited by 30 publications (8 citation statements)
References 39 publications
“…3) PALM features a fully open network structure where its rules can be automatically generated, merged, and updated on demand in a one-pass learning fashion. The rule generation process is based on the self-constructing clustering approach [23], [24], checking the coherence of the input and output space. The rule merging scenario is driven by a similarity analysis via the distance and orientation of two hyperplanes.…”
Section: Introduction (mentioning)
confidence: 99%
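The merging rule sketched in this statement lends itself to a concrete illustration. Below is a minimal Python sketch, assuming each rule corresponds to a hyperplane w·x + b = 0; the similarity measure and the merge thresholds are hypothetical stand-ins, since the quoted text does not spell out PALM's exact criteria.

```python
import numpy as np

def hyperplane_similarity(w1, b1, w2, b2):
    """Compare two hyperplanes w.x + b = 0 by orientation and distance.

    Returns (angle, gap): the angle between the unit normals and the gap
    between the normalized offsets. Small values of both suggest the two
    rules are near-duplicates and could be merged.
    """
    n1 = w1 / np.linalg.norm(w1)
    n2 = w2 / np.linalg.norm(w2)
    # Orientation: angle between unit normals (0 means parallel planes).
    angle = np.arccos(np.clip(abs(np.dot(n1, n2)), 0.0, 1.0))
    # Distance: difference of signed offsets after normalization.
    gap = abs(b1 / np.linalg.norm(w1) - b2 / np.linalg.norm(w2))
    return angle, gap

# Hypothetical merge test; the 0.05 and 0.1 thresholds are illustrative.
angle, gap = hyperplane_similarity(np.array([1.0, 2.0]), 0.50,
                                   np.array([1.05, 2.0]), 0.55)
if angle < 0.05 and gap < 0.1:
    print("rules are similar enough to merge")
```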
“…Intuitively, similar instances are grouped in the same cluster and dissimilar instances in different clusters. Clustering has been widely utilized in a variety of applications, such as revealing the internal structure of the data [3], deriving segmentation of the data [4,5], preprocessing the data for other artificial intelligence (AI) techniques [6,7], business intelligence [1,8], and knowledge discovery in data [9,10]. For example, in electronic text processing [11][12][13], clustering is used to reduce the dimensionality in order to improve the efficiency of the processing, or to ease the curse of dimensionality encountered in high-dimensional problems.…”
Section: Introduction (mentioning)
confidence: 99%
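The dimensionality-reduction use of clustering mentioned above can be made concrete. A minimal sketch, assuming a greedy grouping of features by absolute Pearson correlation and the mean of members as each cluster's representative; both choices are illustrative simplifications, not the specific algorithms of the cited works.

```python
import numpy as np

def reduce_by_feature_clustering(X, threshold=0.8):
    """Toy dimensionality reduction by feature clustering.

    Greedily groups features whose absolute correlation with a cluster's
    seed feature exceeds `threshold`, then represents each cluster by
    the mean of its member features.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    unassigned = list(range(X.shape[1]))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed] + [f for f in unassigned if corr[seed, f] > threshold]
        unassigned = [f for f in unassigned if f not in members]
        clusters.append(members)
    # Each cluster collapses into one derived feature: the member mean.
    reduced = np.column_stack([X[:, c].mean(axis=1) for c in clusters])
    return reduced, clusters

# Six features, three of which are noisy copies: expect three clusters.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 3))
X = np.column_stack([base, base + 0.05 * rng.normal(size=(100, 3))])
reduced, clusters = reduce_by_feature_clustering(X)
print(reduced.shape, clusters)   # (100, 3) and pairs like [0, 3]
```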
“…• The feeding order of x_9, x_10, x_4, x_7, x_5, x_11, x_2, x_12, x_3, x_1, x_6, x_8. After performing SCC in the first iteration, there are 5 clusters: C_1, C_2, C_3, C_4, and C_5, as shown in Figure A2a. Instances x_1, x_5, x_8, and x_12 are assigned to C_1, x_2 and x_10 to C_2, x_3 and x_6 to C_3, x_4, x_7, and x_11 to C_4, and x_9 to C_5. Iterations 2 and 3 are performed subsequently.…”
(mentioning)
confidence: 99%
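The walkthrough above traces an incremental, order-dependent clustering. A minimal one-pass sketch in that spirit, assuming a plain Euclidean distance threshold in place of SCC's actual input/output coherence check; the threshold and the incremental center update are illustrative simplifications.

```python
import numpy as np

def self_constructing_pass(X, threshold=1.5):
    """One pass of a simplified self-constructing clustering.

    Each instance joins the nearest existing cluster if it lies within
    `threshold` of that cluster's center, and otherwise seeds a new
    cluster. Centers are updated incrementally as running means.
    """
    centers, counts, labels = [], [], []
    for x in X:
        if centers:
            dists = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(dists))
        if not centers or dists[j] > threshold:
            centers.append(x.astype(float).copy())    # spawn a new cluster
            counts.append(1)
            labels.append(len(centers) - 1)
        else:
            counts[j] += 1                             # absorb into cluster j
            centers[j] += (x - centers[j]) / counts[j]
            labels.append(j)
    return labels, centers

# Feeding order matters: permuting the rows can change the clusters,
# which is why the quoted example fixes an explicit instance order.
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 2))
print(self_constructing_pass(X)[0])
```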
“…In this step, Correlation Coefficient (CC) is used to measure the redundancy level between features because it can be directly applied to numerical data. Although CC can only measure the linear relationship between variables, it has been shown to be effective in many feature selection methods [99,246]. The CC measure gives a value between -1 and 1 whose absolute value represents the correlation level between two features.…”
Section: Redundancy-Based Feature Clustering: RFC (mentioning)
confidence: 99%
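The redundancy measure described in this statement is simple to compute directly. A minimal sketch, assuming numeric feature vectors; the 0.9 redundancy cutoff below is a hypothetical choice, not a value taken from the quoted work.

```python
import numpy as np

def pearson_cc(f1, f2):
    """Pearson correlation coefficient between two feature vectors.

    Returns a value in [-1, 1]; its absolute value is the redundancy
    level between the two features, as described in the quoted step.
    """
    f1c, f2c = f1 - f1.mean(), f2 - f2.mean()
    return float(f1c @ f2c / (np.linalg.norm(f1c) * np.linalg.norm(f2c)))

# Flag a feature pair as redundant when |CC| exceeds an illustrative cutoff.
rng = np.random.default_rng(2)
f1 = rng.normal(size=200)
f2 = 2.0 * f1 + 0.1 * rng.normal(size=200)   # nearly a linear copy of f1
cc = pearson_cc(f1, f2)
print(cc, "redundant" if abs(cc) > 0.9 else "not redundant")
```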