2022
DOI: 10.1016/j.ins.2021.11.087
A Web service clustering method based on topic enhanced Gibbs sampling algorithm for the Dirichlet Multinomial Mixture model and service collaboration graph

Cited by 21 publications (9 citation statements); References 32 publications
“…In the preliminary work, it was verified through experiments that K-means++ algorithm is better than AGNES, BIRCH, GMM and other common clustering algorithms in Web services clustering under the same set of SFVs [25]. K-means++ algorithm shows excellent clustering performance with low requirements for computing resources.…”
Section: Web Service Clustering Algorithm Based On K-means++
confidence: 91%
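The excerpt above compares K-means++ against AGNES, BIRCH, and GMM on service function vectors (SFVs). A minimal sketch of what K-means++ actually does — seeding centers proportionally to squared distance, then running Lloyd iterations — is shown below on synthetic vectors; the data, dimensions, and cluster count are illustrative, not the cited paper's setup.

```python
# Sketch of k-means++ seeding plus Lloyd iterations on synthetic
# service function vectors (SFVs). All names and sizes are illustrative.
import numpy as np

def kmeans_pp_init(X, k, rng):
    """Pick k centers: first uniformly, the rest proportional to squared distance."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1), axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = kmeans_pp_init(X, k, rng)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(0)
# Three well-separated synthetic "service" clusters in 5 dimensions.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 5)) for c in (0.0, 3.0, 6.0)])
labels, centers = kmeans(X, k=3)
print(labels.shape)  # (90,)
```

The careful seeding is what distinguishes k-means++ from plain k-means: spreading the initial centers out makes the subsequent Lloyd iterations far less sensitive to a bad random start, which is consistent with the excerpt's note about good performance at low computational cost.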
“…Kmeans was employed to cluster services based on the service embeddings. Hu et al [25] proposed a service clustering method combining service function similarity and collaboration similarity. They used the improved GSDMM model to generate high-quality SFV to enhance the accuracy of service function similarity analysis.…”
Section: Related Work
confidence: 99%
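The excerpt describes combining service function similarity with collaboration similarity. The exact fusion rule is not given in this report, so the weighted sum below — along with the weight `alpha` and the toy matrices — is purely an illustrative assumption of one common way two similarity signals can be merged before clustering.

```python
# Hedged sketch: fusing function similarity (from SFVs) with collaboration
# similarity via a weighted sum. The form and `alpha` are assumptions,
# not the cited paper's actual formula.
import numpy as np

def cosine_sim(V):
    """Pairwise cosine similarity between the rows of V."""
    U = V / np.clip(np.linalg.norm(V, axis=1, keepdims=True), 1e-12, None)
    return U @ U.T

rng = np.random.default_rng(1)
sfv = rng.random((4, 8))           # toy service function vectors
collab = rng.random((4, 4))        # toy raw collaboration scores
collab = (collab + collab.T) / 2   # symmetrize
collab /= collab.max()             # scale into [0, 1]

alpha = 0.7                        # assumed trade-off weight
fused = alpha * cosine_sim(sfv) + (1 - alpha) * collab
print(fused.shape)  # (4, 4)
```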
“…This differs from LDA which assumes that a document can have multiple topics in the beginning. Hu et al [ 56 ] showed that GSDMM has better performance than related methods for Web service clustering. The generative process for GSDMM can be expanded for the whole corpus as follows:…”
Section: Methods
confidence: 99%
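The generative process the excerpt alludes to — each document drawn from exactly one topic, unlike LDA's per-word topic mixture — is typically inferred with a collapsed Gibbs sampler. Below is a compact sketch of the standard GSDMM update (Yin-and-Wang style) on a toy corpus; it is not the enhanced sampler of the cited paper, and all hyperparameters and data are illustrative.

```python
# Minimal collapsed Gibbs sampler for the Dirichlet Multinomial Mixture
# (GSDMM): each document is reassigned to a cluster with probability
# proportional to (cluster popularity) x (how well the cluster's word
# counts fit the document's words).
import numpy as np
from collections import Counter

def gsdmm(docs, V, K=4, alpha=0.1, beta=0.1, iters=15, seed=0):
    rng = np.random.default_rng(seed)
    D = len(docs)
    z = rng.integers(K, size=D)        # current cluster of each document
    m = np.zeros(K, dtype=int)         # documents per cluster
    n = np.zeros(K, dtype=int)         # tokens per cluster
    nw = np.zeros((K, V), dtype=int)   # word counts per cluster
    for d, doc in enumerate(docs):
        m[z[d]] += 1; n[z[d]] += len(doc)
        for w in doc:
            nw[z[d], w] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            k = z[d]                   # remove doc d from its cluster
            m[k] -= 1; n[k] -= len(doc)
            for w in doc:
                nw[k, w] -= 1
            logp = np.log(m + alpha)   # cluster-popularity term
            counts = Counter(doc)
            for k2 in range(K):        # word-fit term, handling repeated words
                s = 0.0
                for w, c in counts.items():
                    s += sum(np.log(nw[k2, w] + beta + j) for j in range(c))
                s -= sum(np.log(n[k2] + V * beta + i) for i in range(len(doc)))
                logp[k2] += s
            p = np.exp(logp - logp.max()); p /= p.sum()
            k = rng.choice(K, p=p)     # resample the document's cluster
            z[d] = k; m[k] += 1; n[k] += len(doc)
            for w in doc:
                nw[k, w] += 1
    return z

# Toy corpus over 6 word ids with two clearly separated vocabularies.
docs = [[0, 1, 0, 2], [1, 0, 2], [0, 2, 1], [3, 4, 5], [4, 3, 5, 5], [5, 3, 4]]
z = gsdmm(docs, V=6)
print(z)
```

A useful side effect of the one-topic-per-document assumption is that many of the K initial clusters empty out during sampling, so GSDMM can estimate the number of clusters rather than requiring it exactly — one reason it suits short service descriptions.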
“…rely on graph structure information and effectively captures correlations between node features [19]. However, GAT solely considers the connectivity of edges and does not fully leverage edge features.…”
Section: Incorporating Co-occurrence For Recognizing Disease Symptom ...
confidence: 99%
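The excerpt's point — that GAT uses edge connectivity but not edge features — is visible directly in the attention computation: the adjacency matrix only masks which pairs may attend, while the score itself is built from node features alone. A minimal single-head GAT layer in NumPy (synthetic shapes and weights) makes this concrete.

```python
# Minimal single-head GAT attention in NumPy. Note that the adjacency
# matrix A only gates which neighbors participate in the softmax; edge
# *features* never enter the attention score.
import numpy as np

def gat_layer(H, A, W, a, leaky=0.2):
    """H: (N, F) node features, A: (N, N) 0/1 adjacency, W: (F, Fp), a: (2*Fp,)."""
    Z = H @ W                                  # projected node features, (N, Fp)
    Fp = Z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]) for every ordered node pair
    e = (Z @ a[:Fp])[:, None] + (Z @ a[Fp:])[None, :]
    e = np.where(e > 0, e, leaky * e)
    e = np.where(A > 0, e, -1e9)               # mask non-edges (connectivity only)
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)      # softmax over each node's neighbors
    return att @ Z                             # neighborhood-weighted features

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))
# Path graph on 5 nodes, plus self-loops so every row has a neighbor.
A = np.eye(5) + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
out = gat_layer(H, A, rng.normal(size=(4, 3)), rng.normal(size=(6,)))
print(out.shape)  # (5, 3)
```

Extensions such as edge-featured attention add an edge-embedding term inside the score, which is exactly the gap the citing paper's remark points at.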