Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/115
Deep Polarized Network for Supervised Learning of Accurate Binary Hashing Codes

Abstract: This paper proposes a novel deep polarized network (DPN) for learning to hash, in which each channel in the network output is pushed far away from zero by a differentiable bit-wise hinge-like loss, dubbed the polarization loss. Reformulated within a generic Hamming Distance Metric Learning framework [Norouzi et al., 2012], the proposed polarization loss bypasses the requirement to prepare pairwise labels for (dis-)similar items and yet strictly bounds from above …
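Concretely, the bit-wise hinge-like loss described in the abstract can be sketched as follows. This is a minimal NumPy illustration assuming a margin m and per-class target codes in {-1, +1}; the function name, margin value, and array shapes are our assumptions, not the authors' reference implementation.

```python
import numpy as np

def polarization_loss(outputs, target_bits, margin=1.0):
    """Bit-wise hinge-like loss that pushes each output channel
    away from zero, toward the sign of its target bit.

    outputs:     (batch, K) real-valued network outputs
    target_bits: (batch, K) entries in {-1, +1}
    margin:      hinge margin m; channels with output * target >= m
                 contribute zero loss
    """
    # Hinge term per bit: max(m - v_i * t_i, 0)
    per_bit = np.maximum(margin - outputs * target_bits, 0.0)
    return per_bit.sum(axis=1).mean()

# Example: 2 samples, 4-bit codes
v = np.array([[ 0.9, -0.2,  1.5, -1.1],
              [-0.4,  0.8, -0.3,  0.6]])
t = np.array([[ 1, -1,  1, -1],
              [-1,  1, -1,  1]])
print(polarization_loss(v, t))  # channels already beyond the margin cost nothing
```

Because every bit is penalized independently against a fixed target code, the loss needs no pairwise labels, which matches the framing given in the abstract.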

Cited by 80 publications (64 citation statements) · References 25 publications
“…From our experiment, manipulating the hyperparameters of the algorithm and the batch size of the input data can generate distorted images that fool the classification model, which could lead to other potential threats. Fortunately, some existing works [16][17][18] suggest different encryption methods in federated learning as a defence.…”
Section: Discussion
confidence: 99%
“…[28] introduced a deep incremental hashing framework that alleviates the burden of retraining the model when the gallery image set grows. [22] further introduces a deep polarized loss for Hamming code generation, obviating the need for an additional quantization loss. Despite their success, these methods are not tailored to the large-scale vehicle re-identification scenario and achieve only sub-optimal performance, as illustrated before.…”
Section: A Hash For Content Retrieval
confidence: 99%
“…For the supervised setting, we further incorporate DSH [48], which was among the first works attempting to learn discrete Hamming hash codes with deep convolutional neural networks. In addition, we incorporate other recent methods for detailed comparisons and analyses, including DHN [24], HashNet [11], DCH [13], and DPN [22]. Note that we adopt the original source code for SH, ITQ, DCH and DHN.…”
Section: Comparisons With State-of-the-arts
confidence: 99%
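For context on the Hamming hash codes these methods learn, here is a minimal sketch of how retrieval over binary codes is typically done: codes are packed into integers and ranked by XOR-plus-popcount Hamming distance. The function names and packing choices are illustrative, not taken from any of the cited papers.

```python
import numpy as np

def hamming_rank(query_bits, db_bits):
    """Rank database items by Hamming distance to the query.

    query_bits: (K,) array with entries in {0, 1}
    db_bits:    (N, K) array with entries in {0, 1}
    """
    q = np.packbits(query_bits)           # (K/8,) uint8
    db = np.packbits(db_bits, axis=1)     # (N, K/8) uint8
    # XOR marks differing bits; unpack and sum to count them (popcount)
    xor = np.bitwise_xor(db, q)
    dist = np.unpackbits(xor, axis=1).sum(axis=1)
    return np.argsort(dist, kind="stable")

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(5, 16), dtype=np.uint8)
order = hamming_rank(db[0], db)
print(order)  # db[0] ranks itself first (distance 0)
```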
“…Various learning objectives have been developed to learn hash codes from a training dataset. These objective functions include 1) task learning objectives, which can be further categorized into pointwise [48,53,39,40,11,49], pairwise [4,24,22], triplet-wise [44,31], listwise [51] and unsupervised [26,13]; 2) quantization error minimization, such as losses designed to minimize the p-norm (usually p = 2) between continuous codes and hash codes; 3) code balancing [26,39]. We refer readers to learning-to-hash surveys [43,42,10] for a more detailed review.…”
Section: Related Work
confidence: 99%
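The quantization-error objective in point 2) above is, in its usual formulation, just the p-norm between a continuous code and its binarization. Below is a minimal sketch under that common assumption (p = 2, sign binarization; the function name is ours, not from the cited works).

```python
import numpy as np

def quantization_loss(z, p=2):
    """p-norm quantization error between continuous codes z and
    their binarized counterparts sign(z) in {-1, +1}.

    z: (batch, K) real-valued continuous codes
    """
    b = np.where(z >= 0, 1.0, -1.0)                 # hash codes
    return np.linalg.norm(z - b, ord=p, axis=1).mean()

z = np.array([[0.7, -1.2, 0.1],
              [1.0, -0.9, -0.05]])
print(quantization_loss(z))  # small only when codes sit near +/-1
```

A polarization-style loss, by contrast, drives the outputs toward the margins directly, which is why [22] can drop this extra term.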
“…Recently, deep hashing methods [47,22] have shown great improvements over conventional hashing methods [45,14,15,21,35,36,18]. Furthermore, deep hashing methods can be grouped by how the similarity of the learned hash codes is measured, namely pointwise [48,53,39,11,49], pairwise [24,22,4,3], triplet-wise [44,31], or listwise [51]. Among them, pointwise methods have O(N) computational complexity, whilst the complexity of the others is at least O(N^2) for N data points.…”
Section: Introduction
confidence: 99%
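To make the complexity contrast above concrete, the following sketch counts the loss terms each family evaluates per epoch: a pointwise objective touches each of the N samples once, while a pairwise objective enumerates roughly N^2/2 sample pairs. The counting functions are illustrative, not drawn from any cited paper.

```python
# Loss-term counts per epoch for pointwise vs. pairwise hashing objectives.
def pointwise_terms(n: int) -> int:
    # One loss term per sample, e.g. sample vs. its class target code: O(N)
    return n

def pairwise_terms(n: int) -> int:
    # One loss term per (dis)similar pair: N*(N-1)/2, i.e. O(N^2)
    return n * (n - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(f"N={n:>7}: pointwise={pointwise_terms(n):>12}, "
          f"pairwise={pairwise_terms(n):>15}")
```

The gap explains why pointwise formulations such as the polarization loss scale more gracefully to large galleries.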