This paper addresses the problem of optimal text classification in the area of automated detection of text typology. In conventional approaches to topic-based text classification (including topic modeling), the number of clusters must be set by the researcher, and both the optimal number of clusters and the quality of the model that measures the proximity of texts to each other remain unresolved questions. We propose a novel approach to automatically determining the optimal number of clusters that also incorporates an assessment of the word-level proximity of texts, combined with a text encoding model based on sentence embeddings. Our approach combines Universal Sentence Encoder (USE) pre-processing, agglomerative hierarchical clustering by Ward's method, and the Markov stopping moment for optimal clustering. The preferred number of clusters is determined based on the "e-2" hypothesis. We run an experiment on two datasets of real-world labeled data: News20 and BBC. The proposed model is tested against more traditional text representation methods, such as bag-of-words and word2vec, and is shown to provide much better resulting quality than the baseline DBSCAN and OPTICS models with different encoding methods. We use three quality metrics to demonstrate that clustering quality does not drop as the number of clusters grows. Thus, we come close to the convergence of text clustering and text classification.
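As a rough illustration of the pipeline outlined above, the sketch below encodes documents with the Universal Sentence Encoder and applies Ward's agglomerative clustering over a range of cluster counts. The function names, the candidate range of k, and the silhouette-based model selection are assumptions introduced for illustration; they stand in for the paper's Markov stopping moment and "e-2" criterion, which are not reproduced here.

```python
# Minimal sketch of the general pipeline: sentence embeddings -> Ward
# agglomerative clustering -> select a cluster count by an internal
# quality measure. The paper's Markov-stopping-moment / "e-2" rule is
# NOT implemented; silhouette score is used as a placeholder criterion.

import tensorflow_hub as hub
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score


def cluster_texts(texts, k_range=range(2, 21)):
    # Encode each document with the Universal Sentence Encoder (USE).
    use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
    embeddings = use(texts).numpy()

    best_k, best_labels, best_score = None, None, -1.0
    for k in k_range:
        # Ward's linkage merges the pair of clusters that minimises the
        # increase in total within-cluster variance at each step.
        labels = AgglomerativeClustering(
            n_clusters=k, linkage="ward"
        ).fit_predict(embeddings)
        score = silhouette_score(embeddings, labels)
        if score > best_score:
            best_k, best_labels, best_score = k, labels, score
    return best_k, best_labels


if __name__ == "__main__":
    docs = [
        "stocks rallied on strong quarterly earnings",
        "the team won the championship game",
        "the central bank raised interest rates again",
        "the star striker signed a new contract",
    ]
    # With only four toy documents, restrict k to 2 or 3.
    k, labels = cluster_texts(docs, k_range=range(2, 4))
    print(k, labels)
```

In this toy usage, economy-related and sport-related sentences would be expected to fall into separate clusters; on real corpora such as News20 or BBC, the selection criterion over k is where the approach described in the paper differs from this placeholder.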