One of the key applications of Natural Language Processing (NLP) is automatically extracting topics from large volumes of text. The Latent Dirichlet Allocation (LDA) technique is commonly used to extract topics, based on word frequency, from pre-processed documents. A major issue with LDA is that the quality of the extracted topics is poor when documents do not coherently discuss a single topic. Gibbs sampling, however, operates on a word-by-word basis, changing the topic assignment of one word at a time, and can therefore be applied to documents that span several topics. Hence, this paper proposes a hybrid semantic-similarity measure for topic modelling that combines LDA and Gibbs sampling to exploit the strengths of automatic text extraction and improve the coherence score. An unstructured dataset obtained from a public repository was used to validate the performance of the proposed model. The evaluation shows that the proposed LDA-Gibbs achieved a coherence score of 0.52650, against 0.46504 for standard LDA. The proposed multi-level model thus yields better-quality extracted topics.
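The word-by-word topic reassignment described above can be sketched as a minimal collapsed Gibbs sampler for LDA. This is an illustrative toy implementation under standard LDA assumptions, not the paper's proposed model; the function name `lda_gibbs`, the hyperparameters, and the toy corpus are all invented for the example.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA: resamples one word's topic at a time."""
    rng = random.Random(seed)
    vocab = {w for d in docs for w in d}
    V = len(vocab)
    # Count tables: document-topic counts, topic-word counts, topic totals.
    ndk = [[0] * n_topics for _ in docs]
    nkw = [defaultdict(int) for _ in range(n_topics)]
    nk = [0] * n_topics
    # Random initial topic assignment for every word token.
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                # Remove this one word's current assignment...
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # ...then resample its topic from the full conditional,
                # which mixes document-level and corpus-level evidence.
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return z, nkw

# Toy corpus with two latent themes (fruit vs. hardware).
docs = [["apple", "banana", "fruit"], ["cpu", "gpu", "chip"],
        ["banana", "fruit", "apple"], ["chip", "cpu", "gpu"]]
z, nkw = lda_gibbs(docs, n_topics=2)
```

Because each token is resampled individually, a document can carry several topics at once, which is the property the hybrid approach relies on for multi-topic documents.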