Proceedings of the 27th ACM International Conference on Multimedia 2019
DOI: 10.1145/3343031.3350999
Flexible Online Multi-modal Hashing for Large-scale Multimedia Retrieval

Cited by 110 publications
(31 citation statements)
References 29 publications
“…There are also several multi-modal methods, which handle multi-view retrieval in an online scenario by fusing multiple modalities, e.g., Dynamic Multi-View Hashing (DMVH) [46] and Flexible Online Multi-modal Hashing (FOMH) [33]. Nevertheless, these multi-modal methods still cannot carry out cross-modal search.…”
Section: Related Work
confidence: 99%
“…Despite their simplicity and flexibility, they need longer binary codes to achieve reasonable performance compared with data-dependent ones. Many methods [6]-[8] tend to preserve the neighborhood relations among the original samples when mapping them into a low-dimensional Hamming space. Thus, it is not surprising that data-dependent counterparts have become the main candidates for large-scale image retrieval [8]-[15].…”
Section: Introduction
confidence: 99%
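The excerpt above describes data-dependent hashing that preserves neighborhood relations in a low-dimensional Hamming space. As a minimal illustration (a toy sketch, not any specific method from the cited works), binary codes should assign a smaller Hamming distance to a near neighbor than to a distant sample:

```python
def hamming(a, b):
    """Hamming distance: number of differing bits between equal-length codes."""
    return sum(x != y for x, y in zip(a, b))

# Toy 8-bit codes: a query, a near neighbor, and a distant sample.
query    = [1, 0, 1, 1, 0, 0, 1, 0]
neighbor = [1, 0, 1, 0, 0, 0, 1, 0]  # differs in 1 bit
distant  = [0, 1, 0, 1, 1, 0, 0, 1]  # differs in 6 bits

# Neighborhood preservation: the neighbor's code is closer to the query's.
assert hamming(query, neighbor) < hamming(query, distant)
```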
“…Cross-modal similarity retrieval has been a popular research topic [14,18,20,25,28,32,33,37,40], with the objective of retrieving semantically similar instances across different modalities. In a typical scenario, instances in one modality, e.g., images, are retrieved given a query from another modality, e.g., text.…”
Section: Introduction
confidence: 99%
“…In a typical scenario, instances in one modality, e.g., images, are retrieved given a query from another modality, e.g., text. Hashing based cross-modal retrieval methods [14,17,18,20,25,32,33,41] largely improve retrieval efficiency in both speed and storage by mapping large-scale, high-dimensional multi-modal media data into a common Hamming space, where semantically similar instances from different modalities are represented by similar hash codes and the correlations among them can be measured efficiently with Hamming distance. Since instances from different modalities are heterogeneous in feature representation and distribution, hashing based cross-modal retrieval methods must explore appropriate methodologies to bridge the modality gap.…”
Section: Introduction
confidence: 99%
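The excerpt above outlines the general hashing-based cross-modal retrieval pipeline: map heterogeneous modality features into a common Hamming space and rank candidates by Hamming distance. The sketch below illustrates only that pipeline shape, not the paper's FOMH method; the projection matrices here are random placeholders for what would in practice be learned modality-specific hash functions, and all dimensions are toy values.

```python
import random

random.seed(0)
CODE_LEN = 16               # number of hash bits in the common Hamming space
IMG_DIM, TXT_DIM = 64, 32   # toy feature dimensions for the two modalities

# Hypothetical modality-specific projections (learned in real methods;
# random here purely for illustration).
W_img = [[random.gauss(0, 1) for _ in range(CODE_LEN)] for _ in range(IMG_DIM)]
W_txt = [[random.gauss(0, 1) for _ in range(CODE_LEN)] for _ in range(TXT_DIM)]

def hash_code(feature, W):
    """Project a feature vector and binarize each bit by sign."""
    return [1 if sum(f * row[k] for f, row in zip(feature, W)) > 0 else 0
            for k in range(CODE_LEN)]

def hamming(a, b):
    """Hamming distance: number of differing bits."""
    return sum(x != y for x, y in zip(a, b))

# A text query retrieves images: both are hashed into the shared Hamming
# space, and the image database is ranked by code distance to the query.
images = [[random.gauss(0, 1) for _ in range(IMG_DIM)] for _ in range(100)]
query = [random.gauss(0, 1) for _ in range(TXT_DIM)]

img_codes = [hash_code(x, W_img) for x in images]
q_code = hash_code(query, W_txt)
ranking = sorted(range(len(img_codes)),
                 key=lambda i: hamming(q_code, img_codes[i]))
```

Because every item is reduced to a short binary code, this ranking needs only bitwise comparisons and constant storage per item, which is the speed and storage benefit the excerpt refers to.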