2014
DOI: 10.14313/jamris_2-2014/18
A new heuristic possibilistic clustering algorithm for feature selection

Cited by 5 publications (4 citation statements)
References 12 publications
“…Concerning the simplification of the kNN.avg1 algorithm which we have considered, our experiments seem to show a statistically significant reduction in classification quality due to its use for most of the parameter settings, i.e., when the pairs of algorithms (4,6), (10,11), (14,15) and (18,19) in Table 6 are compared. However, for the setting where both techniques perform best, i.e., when the parameter k is dynamically tuned (pair (3,5)), there is no statistically significant difference between the original kNN.avg1 technique and its simplified version.…”
Section: Tab. 6 The Averaged Results of 200 Runs of the Compared Algorithms
confidence: 87%
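The kNN.avg1 variant discussed above is specific to the citing paper, but the neighbour-vote classification it builds on can be sketched in a few lines. This is plain majority-vote kNN only — `knn_predict` and the toy data are illustrative, not the cited algorithm:

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance)."""
    # indices of training points, nearest first
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

train = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]]
labels = ["a", "a", "b", "b"]
print(knn_predict(train, labels, [0.0, 0.5], k=3))  # -> a
```

Dynamically tuning k, as in the quoted pair (3,5), would typically mean selecting k per query or per dataset by cross-validation rather than fixing it in advance.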
“…Then we propose to model the cases and proceed with the classification of the documents in the framework of hidden Markov models and sequence mining [22], using the concepts of computational intelligence [23], or employing support vector machines [24]. We also pursued other paths, including semantic representation of documents, finding a parallel of the MTC with text segmentation, studying the asymmetry of similarity [13,14], devising new cluster analysis techniques [11], or investigating the applicability of concepts related to coreference detection in data schemas [17].…”
Section: Related Work
confidence: 99%
“…Second, it involves a machine learning algorithm (decision tree) to evaluate pairs of features. In [32,33], a well-known wrapper approach is presented: Recursive Feature Elimination (RFE) using Random Forest. RFE performs feature selection recursively.…”
Section: Related Work
confidence: 99%
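The recursive-elimination loop that RFE performs can be sketched with the standard library alone. This is only an illustration of the wrapper idea: it substitutes absolute Pearson correlation for the random-forest importance scores used in [32,33], and the `pearson`/`rfe` helpers are hypothetical names, not code from the cited works:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def rfe(X, y, n_keep):
    """Recursive feature elimination: score all surviving features,
    drop the weakest, repeat until n_keep remain.
    Importance = |correlation with y| as a stand-in for a
    random-forest importance score."""
    remaining = list(range(len(X[0])))
    while len(remaining) > n_keep:
        scores = {j: abs(pearson([row[j] for row in X], y))
                  for j in remaining}
        remaining.remove(min(remaining, key=scores.get))
    return sorted(remaining)
```

The key property of the wrapper approach survives the substitution: importances are recomputed after every elimination, so a feature's score reflects only the features still in play.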
“…However, for the sake of consistency, our goodness criteria for the outputs are those of PCA and FA. In this respect, fuzzy mountain clustering [5,32] and fuzzy c-means clustering methods [2,3,6,8,12,15,16,17,20,25,26] are analogous to PCA and FA, respectively.…”
Section: Soft Computing and Dimensionality Reduction
confidence: 99%
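Fuzzy c-means, referenced above, assigns each point a graded membership in every cluster rather than a hard label. A minimal pure-Python sketch of the standard alternating updates (m is the fuzzifier; `fuzzy_c_means` is an illustrative implementation, not code from the cited works):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fuzzy_c_means(points, c=2, m=2.0, iters=100, seed=0):
    """Alternate between the membership update
    u_ij = 1 / sum_k (d_ij / d_ik)^(1/(m-1))   (d = squared distance)
    and the centre update (weighted mean with weights u_ij^m)."""
    rng = random.Random(seed)
    dim = len(points[0])
    centers = [list(p) for p in rng.sample(points, c)]
    u = [[0.0] * c for _ in points]
    for _ in range(iters):
        # membership update
        for i, p in enumerate(points):
            d = [dist2(p, cj) for cj in centers]
            if min(d) == 0.0:  # point coincides with a centre
                j = d.index(0.0)
                u[i] = [1.0 if k == j else 0.0 for k in range(c)]
            else:
                u[i] = [1.0 / sum((d[j] / d[k]) ** (1.0 / (m - 1))
                                  for k in range(c))
                        for j in range(c)]
        # centre update
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(points))]
            tot = sum(w)
            centers[j] = [sum(w[i] * points[i][t]
                              for i in range(len(points))) / tot
                          for t in range(dim)]
    return centers, u

pts = [[0.0, 0.0], [0.5, 0.2], [0.1, 0.4],
       [10.0, 10.0], [10.2, 9.8], [9.9, 10.1]]
centers, u = fuzzy_c_means(pts, c=2)
```

The graded membership matrix `u` is what makes the method "possibilistic-adjacent": borderline points carry partial weight in several clusters instead of being forced into one.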