IJPE 2018
DOI: 10.23940/ijpe.18.04.p21.795804

Two-Stage Semantic Matching for Cross-Media Retrieval

Abstract: With the development of information technology, large amounts of multimedia data now surround us; such data are heterogeneous in their low-level features yet consistent in their semantic information. Traditional single-media retrieval cannot bridge the heterogeneity gap between media types, so cross-media retrieval has attracted growing research interest. In this paper, we propose a two-stage semantic matching method for cross-media retrieval based on support vector machines (called TSMCR). Our approach uses a combi…
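The truncated abstract describes matching heterogeneous media through their shared semantic information. As an illustration only (not the paper's TSMCR pipeline, whose details are cut off above), one common scheme maps each modality into a vector of concept probabilities and ranks candidates by similarity in that semantic space; the vectors and names below are hypothetical:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two semantic (probability) vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical semantic vectors over 3 concepts, e.g. produced by
# per-modality classifiers (SVMs, in the paper's setting).
text_query = np.array([0.7, 0.2, 0.1])      # text mapped into semantic space
image_db = {
    "img_a": np.array([0.65, 0.25, 0.10]),  # similar concept distribution
    "img_b": np.array([0.05, 0.15, 0.80]),  # dominated by a different concept
}

# Rank candidate images by semantic similarity to the text query.
ranking = sorted(image_db, key=lambda k: cosine_sim(text_query, image_db[k]),
                 reverse=True)
print(ranking)  # img_a ranks first
```

Because both modalities live in the same concept space after classification, the heterogeneity of the original low-level features no longer matters at matching time.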

Cited by 3 publications (3 citation statements)
References 25 publications
“…Compared with ordinary text, the description text of IED configuration data in e-commerce usually involves e-commerce proper nouns [ 14 ]. Word mis-segmentation easily occurs at the word-segmentation stage, which leads the language model to mis-cluster the word vectors [ 15 ]. Therefore, the article is in the classification package.…”
Section: E-commerce Text Mining Under Big Data
confidence: 99%
“…A cost value represents the difference between two items, together with a deviation (bias) term. By iteratively adjusting the word vectors of all words until the cost over the entire corpus is minimized, the optimal word vectors for the corpus are obtained, so that each word's vector is computed from its context information [31]. The dataset contains a large amount of English text, and the pretrained word vectors therefore capture more accurate context information.…”
Section: Wireless Communications and Mobile Computing
confidence: 99%
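The cost-minimization described in the excerpt can be sketched with a toy GloVe-style objective: word vectors and per-word bias ("deviation") terms are adjusted by gradient descent so that dot products approximate corpus co-occurrence statistics. The co-occurrence matrix, dimensions, and learning rate below are illustrative assumptions, not values from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric co-occurrence log-counts for 4 words, standing in
# for statistics gathered from a real corpus.
X = np.log1p(np.array([[0, 5, 1, 0],
                       [5, 0, 4, 1],
                       [1, 4, 0, 6],
                       [0, 1, 6, 0]], dtype=float))

V, d, lr = 4, 2, 0.02
W = rng.normal(scale=0.1, size=(V, d))   # word vectors
b = np.zeros(V)                          # per-word deviation (bias) terms

def cost():
    # Squared error between modeled and observed co-occurrence values.
    err = W @ W.T + b[:, None] + b[None, :] - X
    np.fill_diagonal(err, 0.0)           # ignore self-pairs
    return 0.5 * np.sum(err ** 2)

start = cost()
for _ in range(300):
    err = W @ W.T + b[:, None] + b[None, :] - X
    np.fill_diagonal(err, 0.0)
    W -= lr * 2 * (err @ W)              # gradient w.r.t. word vectors
    b -= lr * 2 * err.sum(axis=1)        # gradient w.r.t. bias terms
print(cost() < start)                    # cost decreases over iterations
```

Minimizing this cost over the whole matrix is what lets each word's vector absorb its context information, as the excerpt describes.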
“…With the development of compressed sensing, sparse representation expresses a sample (a test sample), e.g. an image or a text, using an overcomplete dictionary (the training samples), and the representation is linear and naturally sparse [23][24][25][26]. The total training set is defined as the overcomplete dictionary A of k classes:…”
Section: Sparse Representation Classifier
confidence: 99%
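A minimal sketch of the sparse representation classifier idea described above, assuming a greedy orthogonal-matching-pursuit solver in place of a full ℓ1 minimization, and a random toy dictionary A whose columns are normalized training samples:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy sparse solve of A x ≈ y."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    x[support] = sol
    return x

# Hypothetical overcomplete dictionary: columns are normalized
# training samples, two per class (classes 0 and 1).
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 4))
A /= np.linalg.norm(A, axis=0)
labels = np.array([0, 0, 1, 1])

# A noisy test sample generated mostly from a class-1 training sample.
y = 0.9 * A[:, 2] + 0.05 * rng.normal(size=8)
x = omp(A, y, k=2)

# SRC decision rule: assign the class whose coefficients
# reconstruct the test sample with the smallest residual.
residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
             for c in (0, 1)]
print(int(np.argmin(residuals)))  # predicts class 1
```

The sparsity is what makes the class-wise residual discriminative: most of the representation's weight concentrates on atoms from the correct class.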