2003
DOI: 10.1109/tit.2002.806118
Hardness of approximating the minimum distance of a linear code

Abstract: We show that the minimum distance of a linear code is not approximable to within any constant factor in random polynomial time (RP), unless nondeterministic polynomial time (NP) equals RP. We also show that the minimum distance is not approximable to within an additive error that is linear in the block length of the code. Under the stronger assumption that NP is not contained in random quasi-polynomial time (RQP), we show that the minimum distance is not approximable to within the factor 2^{(log n)^{1−ε}}, for any ε > 0.
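The problem whose hardness the abstract asserts is easy to state concretely: given a generator matrix, find the smallest Hamming weight of a nonzero codeword. A minimal sketch (the function name and the choice of the [7,4] Hamming code as a sanity check are mine, not from the paper) shows the naive exact algorithm, which enumerates all 2^k − 1 nonzero codewords and is therefore exponential in the dimension k — consistent with the inapproximability results above, no efficient algorithm is expected even for coarse approximations:

```python
from itertools import product

import numpy as np

def min_distance(G):
    """Exact minimum distance of the binary linear code generated by G.

    Enumerates all 2^k - 1 nonzero messages, so the running time is
    exponential in the dimension k.
    """
    G = np.asarray(G) % 2
    k, n = G.shape
    best = n  # the distance is at most the block length
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        weight = int((np.array(msg) @ G % 2).sum())
        best = min(best, weight)
    return best

# Sanity check: the [7,4] Hamming code has minimum distance 3.
G_hamming = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]
print(min_distance(G_hamming))  # 3
```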

Cited by 117 publications (112 citation statements)
References 20 publications
“…• The probabilistic construction of a similar gadget for linear codes used in [11] to prove the NP-hardness of MDP has been successfully derandomized [8]. This provides hope that a derandomization of the lattice gadget employed in this paper may be possible too.…”
Section: Introduction (mentioning)
confidence: 84%
“…Tensoring immediately yields inapproximability results for SVP within larger factors, still under randomized reductions with one-sided error. Moreover, our basic NP-hardness proof-within small approximation factors-is very similar to those in [20,11] so, as explained in the previous paragraphs, it may be more easily derandomized. We remark that the standard tensor product operation amplifies the approximation factor for any instance of the SVP variant defined in this paper, not just for the output of our basic reduction.…”
Section: Introduction (mentioning)
confidence: 89%
“…For example, the tensor product of linear codes is used to amplify the NP-hardness of approximating the minimum distance in a linear code of block length n to arbitrarily large constants under polynomial-time reductions and to 2^{(log n)^{1−ε}} (for any ε > 0) under quasipolynomial-time reductions [16]. This example motivates one to use the tensor product of lattices to increase the hardness factor known for approximating SVP.…”
Section: Techniques (mentioning)
confidence: 99%
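The tensor-product amplification the citing papers refer to is concrete: the tensor product of two linear codes has the Kronecker product of their generator matrices as a generator matrix, and its minimum distance is the product d1 · d2 of the component distances. A minimal sketch under those assumptions (the helper names and the choice of two toy codes are mine; brute-force enumeration is used only to verify the distances):

```python
from itertools import product

import numpy as np

def min_distance(G):
    """Exact minimum distance by enumerating all nonzero codewords."""
    G = np.asarray(G) % 2
    best = G.shape[1]
    for msg in product([0, 1], repeat=G.shape[0]):
        if any(msg):
            best = min(best, int((np.array(msg) @ G % 2).sum()))
    return best

# Two small binary codes: a [3,1] repetition code (d1 = 3) and a
# [3,2] single-parity-check code (d2 = 2).
G1 = np.array([[1, 1, 1]])
G2 = np.array([[1, 0, 1],
               [0, 1, 1]])

# The tensor product code is generated by the Kronecker product of the
# generator matrices; its minimum distance multiplies: d = d1 * d2.
G_tensor = np.kron(G1, G2) % 2

print(min_distance(G1), min_distance(G2), min_distance(G_tensor))  # 3 2 6
```

Squaring a code this way turns a gap of factor c into a gap of factor c^2, which is how constant-factor hardness is boosted to arbitrarily large constants, and, with quasipolynomially many repetitions, to the 2^{(log n)^{1−ε}} factor quoted above.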