2017 International Conference on System Science and Engineering (ICSSE)
DOI: 10.1109/icsse.2017.8030943
An adaptive Latent Semantic Analysis for text mining

Cited by 8 publications (2 citation statements)
References 8 publications
“…Learning topics or patterns from large corpora has drawn increasing attention in data mining and related areas as more and more electronic document archives become available on the Internet. Recent research in machine learning and text mining has developed many classical techniques, e.g., Latent Semantic Analysis (LSA) [1], [2], Probabilistic Latent Semantic Indexing (pLSI) [3], Latent Dirichlet Allocation (LDA) [4], and Topic Word Embedding [5], for finding patterns of words in large document collections. Among these techniques, hierarchical probabilistic models, also known as “topic models”, have become a widely used approach for exploratory and predictive analysis of text [6]-[10].…”
Section: Introduction
confidence: 99%
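The excerpt above lists LSA among classical topic-finding techniques. As a minimal sketch of plain LSA (the toy corpus and the rank k=2 are my own illustrative choices, not the adaptive variant the cited paper proposes), a term-document count matrix is factored with a truncated SVD, and documents are then compared in the resulting low-rank latent space:

```python
import numpy as np

# Hypothetical toy corpus: docs 0-1 share an "LSA" vocabulary,
# docs 2-3 share a "topic model" vocabulary.
docs = [
    "latent semantic analysis maps terms",
    "semantic analysis of terms and documents",
    "dirichlet priors for topic models",
    "topic models use dirichlet priors",
]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document count matrix A (rows: terms, columns: documents).
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# LSA: keep only the k largest singular values of A, giving a
# k-dimensional "latent semantic" representation of each document.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per document

def cos(a, b):
    """Cosine similarity between two latent document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents drawn from the same vocabulary cluster end up close
# in the latent space; unrelated documents do not.
print(cos(doc_vecs[0], doc_vecs[1]), cos(doc_vecs[0], doc_vecs[2]))
```

Here the SVD does the dimensionality reduction that distinguishes LSA from raw bag-of-words comparison: co-occurring terms are merged into shared latent dimensions.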
“…However, in the traditional vector space model there is no semantic relation between the words that make up the vectors; every word must be vectorized, and the cosine calculation consumes considerable CPU processing time. Semantic similarity calculation methods can be divided into corpus-based methods, semantic-dictionary-based methods, and statistical-language-model-based methods [3][4][5][6][7][8]; these methods are more complicated and are easily limited by the size of the corpus. Their similarity results are also easily influenced by noise in the training data.…”
Section: Introduction
confidence: 99%
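The traditional vector space model this excerpt critiques can be sketched in a few lines (the example sentences are my own illustration): each document becomes a bag-of-words count vector, and similarity is the cosine of the angle between vectors. Distinct words occupy orthogonal dimensions, which is exactly the missing semantic relation the passage points out.

```python
from collections import Counter
from math import sqrt

def cosine_sim(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between two documents under the plain
    vector space model: bag-of-words counts, no semantics."""
    a, b = Counter(doc_a.split()), Counter(doc_b.split())
    dot = sum(a[w] * b[w] for w in a)          # only shared surface forms count
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "car" and "automobile" are unrelated dimensions in this model,
# so only the literally shared words contribute.
print(cosine_sim("the car drives fast", "the automobile drives fast"))  # → 0.75
print(cosine_sim("car", "automobile"))                                  # → 0.0
```

The second call shows the failure mode motivating LSA and the other semantic methods the excerpt surveys: synonymous documents with no surface overlap score zero.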