2023
DOI: 10.1109/tmm.2021.3129623
Semi-Supervised Knowledge Distillation for Cross-Modal Hashing

Cited by 14 publications (5 citation statements)
References 36 publications
“…In addition, by combining an intra-modal graph and a cross-modal graph, Wu et al. proposed a Modality-specific and Cross-modal Graph Convolutional Networks (MCGCN) approach to fully explore modality-specific and modality-shared semantic information [17]. Subsequently, Su et al. proposed a semi-supervised knowledge distillation for cross-modal hashing (SKDCH) algorithm, in which teacher-student optimization is employed to propagate semantic knowledge, achieving satisfactory retrieval performance [19].…”
Section: Deep Semi-Supervised Cross-Modal Hashing (mentioning)
confidence: 99%
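As a rough illustration of the graph-combination idea described in the statement above, the sketch below blends an intra-modal and a cross-modal adjacency matrix and runs one standard GCN propagation step. The `gcn_layer` name, the `alpha` blending weight, and the single-layer form are illustrative assumptions, not MCGCN's actual architecture.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_intra, A_cross, H, W, alpha=0.5):
    # Blend the intra-modal and cross-modal graphs (alpha is an assumed
    # mixing weight), then apply one standard GCN step: ReLU(A_norm H W).
    A_mix = alpha * A_intra + (1.0 - alpha) * A_cross
    A_norm = normalize_adj(A_mix)
    return np.maximum(A_norm @ H @ W, 0.0)
```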
“…To reduce the cost of data annotation, semi-supervised cross-modal hashing methods have been proposed that utilize both labeled and unlabeled data [17]-[19]. Moreover, to cope with the incomplete and insufficient labels of training data, some researchers have proposed weakly-supervised cross-modal hashing methods [20], [21].…”
Section: Introduction (mentioning)
confidence: 99%
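For intuition about how labeled and unlabeled data can be combined, here is a minimal pseudo-labeling sketch, one common semi-supervised recipe; it is an assumed illustration, not the specific mechanism used by [17]-[19].

```python
import numpy as np

def semi_supervised_loss(loss_sup, probs_unlab, threshold=0.95):
    # Generic pseudo-labeling (a common semi-supervised recipe, not
    # necessarily the one in [17]-[19]): keep only confident predictions
    # on unlabeled data and treat them as hard labels.
    conf = probs_unlab.max(axis=1)
    pseudo = probs_unlab.argmax(axis=1)
    mask = conf >= threshold
    if mask.any():
        # Cross-entropy against the pseudo-labels on confident samples.
        ce = -np.log(probs_unlab[mask, pseudo[mask]] + 1e-12)
        loss_unsup = ce.mean()
    else:
        loss_unsup = 0.0
    return loss_sup + loss_unsup
```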
“…constraints. Model compression is an effective way to optimize models; its main approaches include knowledge distillation [1]-[5], pruning [6], and quantization [7], [8]. Through model compression, large models are effectively transformed into lightweight counterparts, facilitating their migration to mobile devices.…”
Section: Introduction (mentioning)
confidence: 99%
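To make the distillation branch above concrete, the following is a minimal sketch of the classic soft-target distillation loss in Hinton et al.'s temperature-scaled form; the temperature value and the NumPy implementation are illustrative choices, not the objective of any particular cited method.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as in Hinton et al.'s formulation.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T * T) * kl.mean()
```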
“…In such methods, the aim is to learn a function that transforms different modalities into a common feature space [4,5], where they can be compared directly. With the rapid expansion of data scale and the resulting decline in retrieval efficiency, hash codes have been applied to cross-modal retrieval tasks [6]-[8]. This type of method maps high-dimensional features into the Hamming space by encoding data as binary hash codes, and uses the XOR of two hash codes to calculate their Hamming distance.…”
Section: Introduction (mentioning)
confidence: 99%
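Since XOR-based Hamming distance is the core retrieval operation described above, here is a self-contained sketch; the 8-bit toy codes are assumed values for illustration.

```python
def hamming_distance(code_a: int, code_b: int) -> int:
    # XOR leaves a 1 bit exactly where the two codes differ;
    # counting those bits gives the Hamming distance.
    return bin(code_a ^ code_b).count("1")

# Example: toy 8-bit hash codes for an image and a text query.
img_code = 0b10110010
txt_code = 0b10011010
print(hamming_distance(img_code, txt_code))  # -> 2
```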