2023
DOI: 10.1007/s10489-022-04410-6
Swin transformer-based supervised hashing

Cited by 9 publications (5 citation statements)
References 32 publications
“…Where the information is available, we also show the change in accuracy and parameter count these models achieve in comparison to the highest-accuracy non-transformer structure described in the original works. By adopting the SwinV2 transformer [216], the authors of [264] demonstrate significant improvements to event-based object tracking. [261] offers many comparisons to other works by accuracy and parameter count; however, the authors neglect to include both metrics for any one task.…”
Section: E. Need of Transformers
confidence: 99%
“…This quantization error can be measured by either Euclidean or angular distance, as illustrated in Figure 3. Precisely, we measure the quantization error of the database by the Euclidean distance (19), the squared Euclidean distance (20), and the angle θ_{z_n h_n} between the continuous hash code z_n and the discrete hash code h_n (21).…”
Section: Quantization Error
confidence: 99%
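The three measures in this excerpt are easy to make concrete. Below is a minimal NumPy sketch, assuming the common convention that the discrete code is h = sign(z) with bits in {-1, +1}; the function name and the 16-bit example are illustrative, not the cited paper's implementation.

```python
import numpy as np

def quantization_errors(z):
    """Three quantization-error measures between a continuous hash
    code z and its binarized counterpart h = sign(z): Euclidean
    distance, squared Euclidean distance, and the angle between z
    and h, as described in the excerpt above."""
    h = np.sign(z)                     # discrete code in {-1, +1}
    euclidean = np.linalg.norm(z - h)  # Euclidean distance
    squared = euclidean ** 2           # squared Euclidean distance
    cos_theta = np.dot(z, h) / (np.linalg.norm(z) * np.linalg.norm(h))
    angle = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # angle theta
    return euclidean, squared, angle

# Example: a 16-bit continuous code from a tanh-activated hash layer
z = np.tanh(np.random.randn(16))
print(quantization_errors(z))
```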
“…Learning to hash can also leverage deep learning, becoming deep hashing, which performs substantially better than conventional methods. With the rise of deep learning [9], many powerful yet efficient neural network architectures have been proposed [10], [11], [12], [13], [14] and rapidly incorporated into deep hashing [15], [16], [17], [18], [19]. As a result, deep hashing can encode compact binary codes that represent complex, high-level features.…”
Section: Introduction
confidence: 99%
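As a concrete illustration of the pattern this excerpt describes, here is a minimal PyTorch sketch of a deep hashing head: backbone features are projected to K bits, relaxed with tanh during training, and binarized with sign() for retrieval. The class name, feature dimension, and bit count are assumptions for illustration, not the architecture of any cited work.

```python
import torch
import torch.nn as nn

class DeepHashingHead(nn.Module):
    """Illustrative hash head: maps backbone features to K continuous
    codes in (-1, 1) via tanh; sign() yields the binary code at
    retrieval time. A generic pattern, not the paper's exact model."""
    def __init__(self, feature_dim: int = 768, num_bits: int = 64):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_bits)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.fc(features))  # continuous relaxation z

# Usage: binarize the relaxed codes for the database/query
head = DeepHashingHead()
z = head(torch.randn(4, 768))  # e.g. pooled Swin/ViT features
h = torch.sign(z)              # discrete hash codes in {-1, +1}
```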
“…It employs a hierarchical window mechanism as a feature extractor [27], which supports higher-resolution image-processing tasks [28] and deeper deep-learning architectures. Compared with the conventional vision transformer (ViT), the Swin transformer performs better on tasks such as image perception and classification [29,30], so it has great potential for application to the fault diagnosis of motor equipment.…”
Section: Introduction
confidence: 99%
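For readers unfamiliar with the window mechanism mentioned above, the following PyTorch sketch shows the core window-partition step that lets Swin compute self-attention within local windows rather than over the whole image. It is a simplified illustration (no window shifting or padding), not the reference implementation.

```python
import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    """Split a feature map (B, H, W, C) into non-overlapping windows of
    shape (num_windows * B, window_size * window_size, C), the unit over
    which Swin computes self-attention. Assumes H and W are divisible
    by window_size (Swin pads otherwise)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size,
               W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5)
    return windows.reshape(-1, window_size * window_size, C)

# A 56x56 feature map with 7x7 windows -> 64 windows per image
feat = torch.randn(2, 56, 56, 96)
print(window_partition(feat, 7).shape)  # torch.Size([128, 49, 96])
```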