2015
DOI: 10.1016/j.imavis.2015.02.003

Incremental probabilistic Latent Semantic Analysis for video retrieval

Abstract: Recent research trends in content-based video retrieval have shown topic models to be an effective tool for dealing with the semantic-gap challenge. In this scenario, this work has a dual target: (1) it studies how the use of different topic models (pLSA, LDA and FSTM) affects video retrieval performance; (2) a novel incremental topic model (IpLSA) is presented in order to cope with incremental scenarios in an effective and efficient way. A comprehensive comparison among these four topic models using two …

Cited by 23 publications (6 citation statements)
References 28 publications
“…LDA potentially overcomes these drawbacks by using two Dirichlet distributions: one to model documents, θ ∼ Dir(α), and another to model topics, p(w|t, β) ∼ Dir(β). […] iterating over the document collection, which results in LDA requiring relatively dense distributions to obtain a good hyper-parameter estimation [30]. Even authors in [31] […] Second, the proposed approach does not consider any kind of prior distribution but only the data itself, which eventually simplifies the model when compared to other Bayesian approaches that assume prior distributions with some hyper-parameters [41], [42].…”
Section: Background on Topic Models
confidence: 99%
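The two Dirichlet distributions mentioned in the statement above define LDA's generative process. The following is a minimal sketch of that process using NumPy; all sizes and hyper-parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions for the sketch, not from the paper)
n_topics, vocab_size, n_docs, doc_len = 4, 50, 10, 20
alpha, beta = 0.1, 0.01  # symmetric Dirichlet hyper-parameters

# One topic-word distribution p(w|t) ~ Dir(beta) per topic
topic_word = rng.dirichlet(np.full(vocab_size, beta), size=n_topics)

docs = []
for _ in range(n_docs):
    # Per-document topic mixture theta ~ Dir(alpha)
    theta = rng.dirichlet(np.full(n_topics, alpha))
    words = []
    for _ in range(doc_len):
        t = rng.choice(n_topics, p=theta)             # draw a topic from the mixture
        w = rng.choice(vocab_size, p=topic_word[t])   # draw a word from that topic
        words.append(w)
    docs.append(words)
```

With small α and β, both the per-document topic mixtures and the topic-word distributions come out sparse, which is the regime the statement's remark about "relatively dense distributions" for hyper-parameter estimation refers to.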
“…However, this generative process is usually called ill-defined because documents select topic mixtures while, simultaneously, topics generate documents, so there is no natural way to infer previously unseen documents [8]. Additionally, the number of pLSA parameters grows linearly with the number of training documents, which makes this model particularly memory-demanding and susceptible to over-fitting [13].…”
Section: Background on Topic Models
confidence: 99%
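The linear parameter growth noted above follows from pLSA estimating a separate topic mixture p(t|d) for every training document, on top of the shared topic-word distributions p(w|t). A small sketch of the count (the function name and the example sizes are assumptions for illustration):

```python
def plsa_param_count(n_docs: int, n_topics: int, vocab_size: int) -> int:
    """Free parameters of a standard pLSA model:
    one topic mixture per document plus one word distribution per topic.
    Each K-dim probability simplex contributes K - 1 free parameters."""
    doc_topic = n_docs * (n_topics - 1)       # p(t|d), grows with the corpus
    topic_word = n_topics * (vocab_size - 1)  # p(w|t), fixed by K and V
    return doc_topic + topic_word

# The document-topic part grows linearly with the number of training documents
small = plsa_param_count(1_000, 50, 10_000)   # -> 548_950
large = plsa_param_count(10_000, 50, 10_000)  # -> 989_950
```

Every extra document adds another K − 1 free parameters, which is exactly the memory-demand and over-fitting concern raised in the statement (and one motivation for LDA, whose parameter count does not depend on the corpus size).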
“…According to the standard pLSA model complexity [38], the sMpLSA-tst cost is […], that is, the computational burden of the regular pLSA model. However, it is important to highlight that the sMpLSA-tst model only has a single parameter to be estimated, i.e.…”
Section: Computational Complexity
confidence: 99%
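The exact cost expression is elided in the extracted statement above. As general background only (not the paper's formula): a standard pLSA EM iteration evaluates the posterior p(t|d,w) for every observed (document, word) pair and every topic, and the M-step re-aggregates the same terms, so the per-iteration work scales with the number of non-zero document-word pairs times the number of topics. A hedged sketch of that scaling, with an assumed function name and example sizes:

```python
def plsa_em_ops_per_iter(nonzero_pairs: int, n_topics: int) -> int:
    """Rough operation count for one standard pLSA EM iteration:
    the E-step computes p(t|d,w) for every observed (d, w) pair and
    every topic, and the M-step re-accumulates those same responsibilities."""
    e_step = nonzero_pairs * n_topics
    m_step = nonzero_pairs * n_topics
    return e_step + m_step

# Example: a corpus with 1M non-zero (d, w) entries and 100 topics
ops = plsa_em_ops_per_iter(1_000_000, 100)  # -> 200_000_000
```

This linear-in-topics, linear-in-observations scaling is consistent with the statement's point that sMpLSA-tst matches the computational burden of regular pLSA while estimating only a single parameter.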