2022
DOI: 10.1109/lwc.2021.3139166
Codebook Training for Trellis-Based Hierarchical Grassmannian Classification

Abstract: We consider classification of points on a complex-valued Grassmann manifold of m-dimensional subspaces within the n-dimensional complex Euclidean space. We introduce a trellis-based hierarchical classification network, which is based on an orthogonal product decomposition of the orthogonal basis representing the m-dimensional subspace. Exploiting the similarity of the proposed trellis classifier with a neural network, we propose stochastic gradient-based training techniques. We apply the proposed methods to two…
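The baseline task the abstract describes — assigning a point on the Grassmannian to its nearest codeword — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's trellis scheme: it uses the squared chordal distance and an exhaustive codebook search (the brute-force baseline that hierarchical classifiers aim to speed up); the helper names and codebook construction are assumptions for illustration.

```python
import numpy as np

def random_subspace(n, m, rng):
    # Draw a random m-dimensional subspace of C^n, represented by an
    # orthonormal basis obtained from the QR decomposition of a complex
    # Gaussian matrix.
    a = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
    q, _ = np.linalg.qr(a)
    return q

def chordal_distance_sq(u, v):
    # Squared chordal distance between the subspaces spanned by the
    # orthonormal bases u and v:  m - ||U^H V||_F^2.
    m = u.shape[1]
    return m - np.linalg.norm(u.conj().T @ v, "fro") ** 2

def classify(point, codebook):
    # Exhaustive nearest-codeword search over the Grassmannian codebook.
    dists = [chordal_distance_sq(point, c) for c in codebook]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
n, m = 8, 2
codebook = [random_subspace(n, m, rng) for _ in range(16)]
idx = classify(codebook[5], codebook)  # a codeword is its own nearest neighbor
```

The exhaustive search above costs one distance evaluation per codeword; the paper's trellis decomposition replaces it with a sequence of smaller per-stage decisions.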

Cited by 4 publications (9 citation statements) · References 27 publications
“…where d p ( W, R) is the projection distortion (25), and W is a codebook of N t × K matrices. As we are separately interested in the K columns, Grassmannian codebooks [23], [28], [31], [32] of K-dimensional subspaces are not of interest. Also, due to the modular design in mind, the phases of the K columns are irrelevant.…”
Section: A Eigenvector Quantizationmentioning
confidence: 99%
“…The complexity of finding a codeword using (32) is the Kth root of the complexity of finding a codeword in matrix codebook W with the same granularity. Columnwise Grassmann line quantization has the added benefit that it operates natively on the flag manifold, as opposed to a matrix Grassmann [23], [28], [31], [32] or Stiefel [30] quantization. However, if V is overcomplete, there is a high probability that VV H = I Nt .…”
Section: A Eigenvector Quantizationmentioning
confidence: 99%