Topic modeling, or identifying the set of topics that occur in a collection of articles, is one of the primary objectives of text mining. Typically, a text corpus is represented as a words-by-documents matrix, X, where x_ij encodes the importance score of the i-th word in the j-th document under the Term Frequency-Inverse Document Frequency (TF-IDF) representation. Non-negative Matrix Factorization (NMF) can then be used to extract and model the topics in the corpus. NMF approximates X as a product of two low-rank non-negative factors: W, which represents the topics, and H, which specifies the coordinates of each document in the topic space. Semantic-assisted NMF (SeNMF) improves upon NMF by incorporating word-context and semantic correlations into the model: it adds to the NMF minimization a regularization term based on a Shifted Positive Pointwise Mutual Information (SPPMI) matrix, M, which describes the mutual information between words and their contexts. In this paper, we consider a semantic-assisted NMF topic model, which we call SeNMFk, based on Kullback-Leibler divergence and integrated with a method for determining the number of latent topics. Determining the correct number of topics is extremely important: underestimating it results in a loss of information, i.e., omitted topics and underfitting, while overestimating it leads to noisy, unexplainable topics and overfitting. SeNMFk creates a random ensemble of pairs of matrices whose means equal the initial TF-IDF matrix, X, and the SPPMI matrix, M, respectively, and jointly factorizes each pair with different numbers of topics to acquire sets of latent topics that are stable to noise. We demonstrate the performance of our method by identifying the number of topics in several benchmark text corpora and comparing against other state-of-the-art techniques.
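To make the factorization concrete, the following is a minimal sketch of plain KL-divergence NMF using the classical multiplicative updates; the SPPMI regularization, ensemble resampling, and topic-number selection that define SeNMFk are not shown, and the random matrix here merely stands in for a real TF-IDF matrix.

```python
import numpy as np

# Minimal sketch of KL-divergence NMF via multiplicative updates
# (Lee & Seung). This is NOT the full SeNMFk method: the SPPMI
# regularization, resampled ensembles, and model selection are omitted,
# and all matrix sizes and data below are illustrative.
rng = np.random.default_rng(0)
n_words, n_docs, k = 30, 20, 5
X = rng.random((n_words, n_docs))      # stand-in for the TF-IDF matrix X

W = rng.random((n_words, k))           # topics: words-by-topics factor
H = rng.random((k, n_docs))            # document coordinates in topic space
eps = 1e-12                            # guards against division by zero

def kl_div(X, WH):
    """Generalized KL divergence D_KL(X || WH) for non-negative matrices."""
    return float(np.sum(X * np.log((X + eps) / WH) - X + WH))

kl_before = kl_div(X, W @ H + eps)
for _ in range(200):
    WH = W @ H + eps
    # Updates that monotonically decrease D_KL(X || WH) while keeping
    # W and H non-negative.
    W *= ((X / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    WH = W @ H + eps
    H *= (W.T @ (X / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
kl_after = kl_div(X, W @ H + eps)      # substantially below kl_before
```

In the semantic-assisted setting, the SPPMI matrix M would be factorized jointly with X (sharing the word factor), and SeNMFk repeats such joint factorizations over the resampled ensemble for each candidate number of topics.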
We also show that the number of document classes in the input corpus may differ from the number of extracted latent topics, and that these classes can be retrieved by clustering the column vectors of the matrix H. We demonstrate that our unsupervised method, SeNMFk, not only determines the correct number of topics, but also extracts topics with high coherence and accurately classifies the documents of the corpus.
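The clustering step above can be sketched as follows: treat each column of H as a document's coordinates in topic space and group the columns with k-means. This is a minimal illustration assuming a random H and a simple Lloyd's-iteration k-means; the clustering algorithm and cluster count are assumptions, not a prescription from the paper.

```python
import numpy as np

# Hypothetical example: H has shape (k_topics, n_documents);
# each column gives one document's coordinates in topic space.
rng = np.random.default_rng(0)
k_topics, n_docs = 4, 12
H = rng.random((k_topics, n_docs))

def kmeans(points, n_clusters, n_iter=50, seed=0):
    """Minimal Lloyd's k-means; points has shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center (Euclidean distance).
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as its cluster mean (skip empty clusters).
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels

# Cluster the columns of H, i.e., the documents, into document classes.
doc_classes = kmeans(H.T, n_clusters=3)   # one class label per document
```

Note that the number of clusters requested here need not equal k_topics, which mirrors the observation that the number of document classes can differ from the number of latent topics.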