2013
DOI: 10.1007/978-3-642-38348-9_2
Lossy Codes and a New Variant of the Learning-With-Errors Problem

Abstract: The hardness of the Learning-With-Errors (LWE) problem has become one of the most useful assumptions in cryptography. It exhibits a worst-to-average-case reduction, making the LWE assumption very plausible. This worst-to-average-case reduction is based on a Fourier argument, and the errors for current applications of LWE must be chosen from a Gaussian distribution. However, sampling from Gaussian distributions is cumbersome. In this work we present the first worst-to-average-case reduction for LWE with uniformly distributed errors. …
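To make the variant concrete: below is a minimal Python sketch contrasting standard LWE sample generation with rounded-Gaussian errors against the uniform small-interval errors whose hardness this paper establishes. All parameter values (n, m, q, B, sigma) are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Illustrative parameters only -- not from the paper.
n, m, q = 256, 512, 12289   # secret dimension, number of samples, modulus
B = 64                      # uniform errors drawn from [-B, B]
sigma = 3.2                 # width of the rounded-Gaussian alternative

rng = np.random.default_rng()

def lwe_samples_uniform(s):
    """Samples (A, b = A s + e mod q) with e uniform over [-B, B]^m,
    the error distribution considered in this paper."""
    A = rng.integers(0, q, size=(m, n))
    e = rng.integers(-B, B + 1, size=m)
    return A, (A @ s + e) % q

def lwe_samples_gaussian(s):
    """The classical variant: e from a rounded continuous Gaussian,
    standing in here for the discrete Gaussian of Regev's reduction."""
    A = rng.integers(0, q, size=(m, n))
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(np.int64)
    return A, (A @ s + e) % q

s = rng.integers(0, q, size=n)    # uniform secret
A, b = lwe_samples_uniform(s)     # one batch of uniform-error samples
```

The uniform sampler needs only integer randomness, which is the practical point the abstract makes: no Gaussian sampling machinery is required.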

Cited by 38 publications (56 citation statements) · References 23 publications
“…Our construction uses private randomness, which is allowed in the fuzzy extractor setting but not in noiseless randomness extraction. Second, the code used is a random linear code, which allows us to use the Learning with Errors (LWE) assumption due to Regev [Reg05,Reg10] and derive a longer key r. Specifically, we use the recent result of Döttling and Müller-Quade [DMQ13], which shows the hardness of decoding random linear codes when the error vector comes from the uniform distribution, with each coordinate ranging over a small interval. This allows us to use w as the error vector, assuming it is uniform.…”
Section: Our Negative Results (mentioning) · confidence: 99%
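The construction referenced here is a code-offset style sketch over a random linear code, with the noisy reading w itself serving as the LWE error vector. Below is a minimal sketch of the generate step, assuming w is uniform over a small interval; the names and parameters are hypothetical illustrations, not the notation of the cited papers.

```python
import numpy as np

# Hypothetical parameters for illustration.
n, m, q = 128, 512, 12289     # randomness length, reading length, modulus
rng = np.random.default_rng()

def gen(w):
    """Code-offset sketch over a random linear code.

    The reading w plays the role of the LWE error vector, so the
    [DMQ13] result (hardness of decoding random linear codes under
    uniform small-interval errors) is what lets (A, p) hide x when
    w really is uniform over a small interval.
    """
    A = rng.integers(0, q, size=(m, n))   # random linear code
    x = rng.integers(0, q, size=n)        # fresh secret; the key r is derived from x
    p = (A @ x + w) % q                   # public helper value
    return (A, p), x

w = rng.integers(0, 16, size=m)           # stand-in uniform reading
(A, p), x = gen(w)
```

Recovery from a nearby reading w' would compute p - w' = A x + (w - w') mod q and decode x from the small residual error; the computational guarantee is that (A, p) is indistinguishable from random when w is uniform.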
“…Besides the use of a similar high-level lossiness argument, the technical details of the proof are quite different, and the results are different as well. Just like our work, [11] proves hardness for uniformly distributed errors, and requires the number $m$ of samples to be fixed in advance. However, [11] requires the noise bound to be bigger than $\sqrt{n}$ (in fact, at least $m\sqrt{n}$, where $m$ is the number of samples), while in our work the errors can be smaller than $\sqrt{n}$, or even binary. On the other hand, when the magnitude of the errors is large (at least $m\sqrt{n} \geq \sqrt{n}$), [11] allows the number of samples $m = n^{O(1)}$ to be an arbitrarily large polynomial, while here we require it to be linear, $m = \Theta(n)$.…”
Section: Techniques and Comparison to Related Work (mentioning) · confidence: 90%
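Side by side, the two parameter regimes in this comparison are (a summary of the quoted text, writing $\|e\|_\infty$ for the error magnitude; this shorthand is not taken from either paper):

$$\text{[11]:}\quad \|e\|_\infty \geq m\sqrt{n}, \qquad m = n^{O(1)}\ \text{fixed in advance};$$
$$\text{the citing work:}\quad \|e\|_\infty < \sqrt{n}\ \text{(even binary)}, \qquad m = \Theta(n).$$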