2020
DOI: 10.1016/j.procs.2020.03.057
Pseudo Random Number Generation: a Reinforcement Learning approach

Abstract: Pseudo-Random Number Generators (PRNGs) are algorithms designed to generate long sequences of statistically uncorrelated numbers, i.e. Pseudo-Random Numbers (PRNs). These numbers are widely employed in mid-level cryptography and in software applications. Test suites are used to evaluate the quality of PRNGs by checking statistical properties of the generated sequences. Machine learning techniques are often used to break these generators, for instance approximating a certain generator or a certain sequence using a ne…

Cited by 24 publications (19 citation statements) · References 7 publications
“…Most digital logic and computing systems are base-2, but algorithms such as Cooley-Tukey [17] offer equivalent mixed-radix representations to achieve the same overall calculation, but as a parallel composition of many small RNS operations. In particular, most PRNG methods were designed to produce bistate binary values, and methods to quantify PRNG output randomness usually require their binary form [18]. Consequently, the most common output of a PRNG is a k-bit binary word that may be viewed as an element r ∈ Z_{2^k}.…”
Section: Introduction
confidence: 99%
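The k-bit binary word described in the statement above can be sketched in a few lines. This is an illustrative fragment only; the generator and the choice k = 32 are assumptions, not details from the cited papers.

```python
# Sketch: a PRNG output as a k-bit binary word, i.e. an element
# r of Z_{2^k}. The generator and k are illustrative choices.
import random

k = 32                      # output width in bits (assumed)
rng = random.Random(42)     # any seeded PRNG would do here

r = rng.getrandbits(k)      # one k-bit binary word
assert 0 <= r < 2**k        # r is an element of Z_{2^k}

bits = format(r, f"0{k}b")  # the same value in its binary form,
print(len(bits))            # as required by randomness test suites
```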
“…A Deep Reinforcement Learning (DRL) pipeline can then be used on this MDP to train a PRNG agent. This DRL approach was used for the first time in [6] with promising results; see modeling details in Section 2.2. It is a probabilistic approach that generates pseudo-random numbers with a "variable period", because the learned policy will generally be stochastic.…”
Section: Introduction
confidence: 99%
“…However, the MDP formulation in [6] has an action set whose size grows linearly with the length of the sequence. This is a severe limiting factor, because when the action set exceeds a certain size it becomes very difficult, if not impossible, for an agent to explore the action space within a reasonable time.…”
Section: Introduction
confidence: 99%
“…Such ideal random sequences can easily be produced from natural sources, for example atmospheric noise, radioactive decay and other natural phenomena. However, such sequences cannot be reproduced mathematically, because the natural sources themselves vary [4,5]. Due to this disadvantage, sources of such true random sequences are unreliable for practical computer applications.…”
Section: Introduction
confidence: 99%
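The reproducibility property that true random sources lack, and that motivates PRNGs in the statement above, can be sketched as follows. The generator and parameters are illustrative assumptions.

```python
# Sketch: a seeded PRNG regenerates the identical sequence on demand,
# which no physical noise source can do.
import random

def prn_sequence(seed: int, n: int) -> list:
    rng = random.Random(seed)          # the seed fixes everything
    return [rng.getrandbits(8) for _ in range(n)]

a = prn_sequence(1234, 5)
b = prn_sequence(1234, 5)
assert a == b  # same seed, same sequence: reproducible by construction
```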