“…The total number of samples (30) …[14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]} P2{[0,2,4,6,8,10,12,14,16,18,20,22,24,26,28], [1,3,5,7,9,11,13,15,17,19,21,23,25,27,29]} P3{[0,1,2,3,4,5,6,7,8,…”
Section: All Predictions Correctly / Number Of Samples
mentioning
confidence: 99%
“…In order to make clustering widely available in more fields, it can be applied to large-scale group decision-making [8,9]. Existing clustering algorithms mainly include hard clustering [10,11] and fuzzy clustering [12,13,14]. The former has only two membership degrees, 0 and 1; that is, each data object is strictly assigned to a single cluster. The membership degrees of the latter can take any value within the interval [0,1]; that is, a data object can be assigned to multiple clusters with different membership degrees.…”
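The distinction between hard and fuzzy membership described above can be illustrated with two small membership matrices. The data and values below are made up for illustration and are not taken from the cited paper:

```python
import numpy as np

# Hard clustering: each of 4 objects belongs to exactly one of 2 clusters,
# so every membership degree is either 0 or 1.
hard = np.array([
    [1, 0],
    [1, 0],
    [0, 1],
    [0, 1],
], dtype=float)

# Fuzzy clustering: degrees lie anywhere in [0, 1]; an object can belong to
# several clusters at once, with its degrees typically summing to 1
# (the usual FCM-style constraint).
fuzzy = np.array([
    [0.9, 0.1],
    [0.6, 0.4],
    [0.3, 0.7],
    [0.1, 0.9],
])

# Both matrices satisfy the row-sum constraint; only the hard one is binary.
assert np.allclose(hard.sum(axis=1), 1.0)
assert np.allclose(fuzzy.sum(axis=1), 1.0)
assert set(np.unique(hard)) <= {0.0, 1.0}
```

The second object, for example, has fuzzy memberships 0.6 and 0.4, so it is partially assigned to both clusters, something a hard partition cannot express.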
Among fuzzy clustering algorithms, the possibilistic fuzzy clustering algorithm has been widely used in many fields. However, the traditional Euclidean distance cannot measure the similarity between samples well in high-dimensional data. Moreover, if clusters overlap or features are strongly correlated, clustering accuracy is easily degraded. To overcome these problems, this paper proposes a collaborative possibilistic fuzzy clustering algorithm based on the information bottleneck. The algorithm retains the advantages of the original algorithm: on the one hand, it uses mutual information loss as the similarity measure instead of Euclidean distance, which helps reduce the subjective errors caused by arbitrary choices of similarity measure and improves clustering accuracy; on the other hand, it introduces the collaborative idea into information-bottleneck-based possibilistic fuzzy clustering, forming an accurate and complete representation of the data's organizational structure by making full use of the correlations between different feature subsets. To examine the clustering performance of the algorithm, five algorithms were selected for comparison experiments on several datasets. Experimental results show that the proposed algorithm outperforms the comparison algorithms in terms of clustering accuracy and collaborative validity.
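The abstract above uses mutual information loss as the similarity measure. A common concrete form of this idea, from the standard agglomerative information-bottleneck formulation, scores the cost of merging two clusters by a weighted Jensen-Shannon divergence between their conditional distributions; the sketch below follows that general formulation and may differ from the paper's exact definition:

```python
import numpy as np

def js_divergence(p, q, w1, w2):
    """Weighted Jensen-Shannon divergence between distributions p and q."""
    m = w1 * p + w2 * q  # the weighted mixture distribution

    def kl(a, b):
        # KL divergence in bits, skipping zero-probability entries of a.
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return w1 * kl(p, m) + w2 * kl(q, m)

def ib_merge_cost(p_ci, p_cj, py_given_ci, py_given_cj):
    """Mutual-information loss incurred by merging clusters c_i and c_j.

    p_ci, p_cj: prior probabilities of the two clusters.
    py_given_ci, py_given_cj: conditional distributions p(Y | c) as arrays.
    """
    total = p_ci + p_cj
    w1, w2 = p_ci / total, p_cj / total
    return total * js_divergence(py_given_ci, py_given_cj, w1, w2)

# Merging two clusters with identical conditionals loses no information...
p = np.array([0.5, 0.3, 0.2])
assert np.isclose(ib_merge_cost(0.4, 0.6, p, p), 0.0)

# ...while dissimilar conditionals incur a strictly positive loss, so the
# cost can serve as a (dis)similarity measure in place of Euclidean distance.
q = np.array([0.1, 0.2, 0.7])
assert ib_merge_cost(0.4, 0.6, p, q) > 0
```

Under this measure, two samples or clusters are "close" when merging them discards little information about the relevance variable Y, which sidesteps the geometric assumptions behind Euclidean distance in high-dimensional data.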