2016
DOI: 10.1109/tit.2016.2570238

On the Similarities Between Generalized Rank and Hamming Weights and Their Applications to Network Coding

Abstract: Rank weights and generalized rank weights have been proven to characterize error and erasure correction, and information leakage in linear network coding, in the same way as Hamming weights and generalized Hamming weights describe classical error and erasure correction, and information leakage in wire-tap channels of type II and code-based secret sharing. Although many similarities between both cases have been established and proven in the literature, many other known results in the Hamming case, such as bound…
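For orientation, here is a minimal sketch of the standard definitions from the rank-metric literature; the notation is assumed here and may differ from the paper's own conventions. For a linear code $C \subseteq \mathbb{F}_{q^m}^n$, the rank weight of a codeword $c = (c_1, \dots, c_n)$ is
$$\mathrm{wt}_R(c) = \dim_{\mathbb{F}_q} \langle c_1, \dots, c_n \rangle_{\mathbb{F}_q},$$
and the $r$-th generalized rank weight is commonly defined as
$$d_{R,r}(C) = \min \{\, \mathrm{wt}_R(D) \,:\, D \subseteq C \text{ an } \mathbb{F}_{q^m}\text{-subspace},\ \dim_{\mathbb{F}_{q^m}} D = r \,\},$$
where $\mathrm{wt}_R(D)$ is the $\mathbb{F}_q$-dimension of the span of all coordinates of all codewords of $D$ (the rank support of $D$). The generalized Hamming weights $d_{H,r}(C)$ are recovered by replacing the rank support with the Hamming support, i.e. the set of coordinate positions in which some codeword of $D$ is nonzero.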

Cited by 46 publications (84 citation statements). References 37 publications.
“…11] are essentially equivalent. However, we adopt the approach in [35], which gives one restriction and one shortening for each support rather than each matrix, since different generator matrices of the same support give equivalent codes. In this way, the properties of the restricted and shortened codes are related to properties of the corresponding supports.…”
Section: Restriction, Shortening, Change of Bases, and Pre-shortening (mentioning)
confidence: 99%
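To make the quoted distinction concrete, here is a hedged sketch using common conventions, which may not match [35] exactly. In the Hamming case, a support is a set $J \subseteq \{1, \dots, n\}$, and the restricted and shortened codes are
$$C_J = \{\, c|_J : c \in C \,\}, \qquad C^J = \{\, c|_J : c \in C,\ c_i = 0 \text{ for all } i \notin J \,\}.$$
In the rank case, the role of $J$ is played by an $\mathbb{F}_q$-linear support $V \subseteq \mathbb{F}_q^n$: choosing any matrix whose row space is $V$ yields a concrete restriction and shortening, and different choices of matrix give equivalent codes, so, up to equivalence, one obtains one restriction and one shortening per support, as the quotation states.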
“…This result follows directly from […]. Proof. With the definitions and results from Sections 2 and 3, the proof can be translated mutatis mutandis from those in [35, Th. 3] or [43, Prop.…”
Section: Generalized Sum-rank Weights (mentioning)
confidence: 99%
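Since this quotation concerns sum-rank weights, a brief hedged reminder of the standard definition may help; the notation is assumed, not taken from the citing paper. A sum-rank codeword is split into blocks, $c = (c^{(1)} \mid \dots \mid c^{(\ell)})$ with $c^{(i)} \in \mathbb{F}_{q^m}^{n_i}$, and its sum-rank weight is
$$\mathrm{wt}_{SR}(c) = \sum_{i=1}^{\ell} \mathrm{wt}_R\big(c^{(i)}\big).$$
Generalized sum-rank weights are then obtained, as in the rank and Hamming cases, by minimizing the dimension of the support over $r$-dimensional subcodes. Taking $\ell = 1$ recovers the rank metric, and taking $n_1 = \dots = n_\ell = 1$ recovers the Hamming metric, which is the sense in which results from the rank case can be expected to translate.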
“…The result is a variant of [6, Theorem 1] and was explicitly presented in [10, Theorem 3.2 and Corollary 3.3] using the Bruhat decomposition for matrices. It is also an immediate consequence of the more general results [20, Theorem 2] or [20, Theorem 6].…”
Section: Block Codes (mentioning)
confidence: 67%
“…The perspective is how the original error is transferred into the propagated errors under different distance metrics. Martínez-Peñas [25] compares the Hamming metric and the rank metric in network coding. X is transmitted with a network coding scheme.…”
Section: Existing Error-Correcting Methods in Network Coding (mentioning)
confidence: 99%
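As a small illustration of the metric comparison mentioned above, the following self-contained Python sketch contrasts the Hamming weight and the rank weight of a vector over $\mathbb{F}_{2^m}$. It is not the construction of [25] or of the paper under discussion: field elements are simply represented by their $\mathbb{F}_2$-coordinate vectors (encoded as integers), which is all the rank weight depends on, and the choice of basis is an assumption.

def hamming_weight(vec):
    """Number of nonzero coordinates of vec."""
    return sum(1 for x in vec if x != 0)

def rank_weight(vec, m):
    """F_2-dimension of the span of the coordinate vectors of the entries.

    Each entry of vec is an integer in [0, 2**m), read as a length-m vector
    over F_2; Gaussian elimination over GF(2) is done on the integers.
    """
    pivots = [0] * m      # pivots[i] stores a basis vector whose leading bit is i
    rank = 0
    for x in vec:
        for i in reversed(range(m)):
            if not (x >> i) & 1:
                continue              # bit i is not set, try a lower bit
            if pivots[i] == 0:
                pivots[i] = x         # new pivot found: dimension grows by one
                rank += 1
                break
            x ^= pivots[i]            # eliminate bit i and keep reducing
    return rank

# Hypothetical example over F_{2^3} (m = 3): the third entry is the sum of the
# first two, so the Hamming weight is 3 but the rank weight is only 2.
v = [0b001, 0b010, 0b011, 0b000]
print(hamming_weight(v))   # 3
print(rank_weight(v, 3))   # 2

Roughly speaking, in rank-metric network coding the rank weight of an error counts how many linearly independent error packets were injected, whereas the Hamming weight counts affected coordinates, which is the contrast the quoted passage alludes to.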