2019
DOI: 10.1109/access.2019.2924996
Distributed Fast Supervised Discrete Hashing

Abstract: Hash-based learning has attracted considerable attention due to its fast retrieval speed and low computational cost on large-scale databases. Compared with unsupervised hashing, supervised hashing generally achieves higher retrieval accuracy by leveraging supervised information. Most existing supervised hashing methods, such as supervised discrete hashing (SDH) and fast SDH (FSDH), are concerned mainly with the centralized setting. SDH regresses the hash code to its corresponding label, whereas FSDH regresses the label to its corresponding hash code.
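To make the two regression directions concrete, here is a minimal NumPy sketch (not the authors' implementation): an SDH-style step regresses binary codes onto labels, while an FSDH-style step regresses labels onto codes. All shapes, the regularizer `lam`, and the random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, c = 1000, 32, 10                      # samples, code length, classes
B = np.sign(rng.standard_normal((n, L)))    # binary codes in {-1, +1}
Y = np.eye(c)[rng.integers(0, c, n)]        # one-hot labels
lam = 1.0                                   # illustrative ridge regularizer

# SDH-style step: regress hash codes onto labels (codes -> labels).
W_sdh = np.linalg.solve(B.T @ B + lam * np.eye(L), B.T @ Y)   # (L, c)

# FSDH-style step: regress labels onto hash codes (labels -> codes);
# since Y is one-hot and low-dimensional, this system is much smaller.
W_fsdh = np.linalg.solve(Y.T @ Y + lam * np.eye(c), Y.T @ B)  # (c, L)
```

The FSDH-style direction inverts a c × c system instead of an L × L one, which is one intuition for its lower computational cost when the number of classes is small.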

Cited by 6 publications (8 citation statements)
References 17 publications
“…The experiment evaluates the average precision and mean average precision (MAP) of each algorithm on the CIFAR-10 dataset. On the Wiki 2 dataset, we compare the performance of three distributed supervised algorithms: DSDHR, DSDH [36], and DFSDH [40]. DSDH regresses the hash code to the label information, while DFSDH regresses the label information to the hash code.…”
Section: B. Analysis of Computational Complexity
confidence: 99%
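Since the cited experiments report mean average precision, the following self-contained sketch shows how MAP is commonly computed from Hamming distances and ground-truth relevance; the function name and array layout are assumptions, not taken from the cited papers.

```python
import numpy as np

def mean_average_precision(hamming_dist, relevant):
    # hamming_dist: (num_queries, num_db) distances between codes.
    # relevant: boolean (num_queries, num_db) ground-truth relevance.
    aps = []
    for dist, rel in zip(hamming_dist, relevant):
        order = np.argsort(dist)                  # nearest items first
        rel_sorted = rel[order]
        n_rel = rel_sorted.sum()
        if n_rel == 0:
            continue                              # skip queries with no relevant items
        hits = np.cumsum(rel_sorted)
        prec_at_k = hits / np.arange(1, len(rel_sorted) + 1)
        aps.append((prec_at_k * rel_sorted).sum() / n_rel)
    return float(np.mean(aps))
```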
“…where $W^{(l)} \in \mathbb{R}^{d \times k}$ denotes the top PCA [40] vectors corresponding to the $l$-th individual, $k$ is the length of the short codes, $\mathrm{tr}(\cdot)$ is the matrix trace, and $I_k$ is a $k \times k$ identity matrix. The final projection matrix $W = [W^{(1)}, W^{(2)}, \ldots]$…”
Section: Bootstrap
confidence: 99%
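As a hedged illustration of the quoted construction, the sketch below computes the top-k PCA directions for each (hypothetical) individual, so that every block satisfies the orthonormality constraint W.T @ W = I_k, and then concatenates the blocks into the final projection matrix. Data shapes and the number of individuals are assumptions.

```python
import numpy as np

def top_k_pca(X, k):
    # Top-k principal directions of X as a (d, k) matrix with
    # orthonormal columns, i.e. W.T @ W = I_k.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

rng = np.random.default_rng(1)
# Hypothetical per-individual data blocks; k = 8 is the short-code length.
blocks = [top_k_pca(rng.standard_normal((200, 64)), 8) for _ in range(3)]
W = np.concatenate(blocks, axis=1)   # W = [W^(1), W^(2), ...]
```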
“…In machine learning, nearest neighbor search [1], [2] is one of the most widely used techniques, and it has been applied in many areas, e.g., data retrieval [3], [4], scene text recognition [5], and computer vision [6].…”
Section: Introduction
confidence: 99%
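Because hashing is typically used to accelerate exactly this kind of nearest neighbor search, a brute-force Hamming-space k-NN baseline may help fix ideas; the code below is a generic sketch, not any cited method, and all sizes are made up.

```python
import numpy as np

def hamming_knn(query_codes, db_codes, k=5):
    # Brute-force k-NN over binary codes in {0, 1}: count differing bits.
    dists = (query_codes[:, None, :] != db_codes[None, :, :]).sum(axis=2)
    return np.argsort(dists, axis=1)[:, :k]     # indices of k nearest codes

rng = np.random.default_rng(2)
db = rng.integers(0, 2, (10_000, 32))           # database of 32-bit codes
q = rng.integers(0, 2, (5, 32))                 # five query codes
neighbors = hamming_knn(q, db)
```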
“…Note that there exist a few distributed hashing algorithms [50, 89, 105, 60] which are optimized by the Alternating Direction Method of Multipliers (ADMM) [7]. To update the local model with ADMM, each local machine communicates with all adjacent machines to aggregate their latest local models.…”
Section: Approaches and Contributions
confidence: 99%
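To illustrate the consensus-style ADMM updates the quote refers to, here is a minimal sketch for the toy problem min_x sum_i ||A_i x - b_i||^2, where each "machine" holds one (A_i, b_i) block; the global averaging step stands in for the neighbor aggregation described above, and the problem, iteration count, and rho are all assumptions.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=50):
    # Consensus ADMM for min_x sum_i ||A_i x - b_i||^2 (illustrative only).
    d, m = A_list[0].shape[1], len(A_list)
    x = [np.zeros(d) for _ in range(m)]          # local models
    u = [np.zeros(d) for _ in range(m)]          # scaled dual variables
    z = np.zeros(d)                              # consensus variable
    for _ in range(iters):
        for i in range(m):
            # Local closed-form update:
            # argmin_x ||A_i x - b_i||^2 + (rho/2) ||x - z + u_i||^2
            H = 2 * A_list[i].T @ A_list[i] + rho * np.eye(d)
            g = 2 * A_list[i].T @ b_list[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(H, g)
        z = np.mean([x[i] + u[i] for i in range(m)], axis=0)  # aggregate models
        for i in range(m):
            u[i] = u[i] + x[i] - z               # dual ascent step
    return z
```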
“…To date, a few distributed hashing algorithms [50, 89, 105, 60] have been proposed. The first distributed hashing algorithm was proposed by Leng et al. [50]; it minimizes the quantization error, split with consensus constraints, by ADMM. The follow-up works [60, 89, 105] split the original objective functions in [56, 72] with consensus constraints and optimize the new loss functions by ADMM.…”
Section: Distributed Hashing
confidence: 99%