A reinforcement algorithm introduced by Simon (Biometrika 42(3/4):425–440, 1955) produces a sequence of uniform random variables with long-range memory as follows. At each step, with a fixed probability $p\in(0,1)$, $\hat{U}_{n+1}$ is sampled uniformly from $\hat{U}_1, \ldots, \hat{U}_n$, and with complementary probability $1-p$, $\hat{U}_{n+1}$ is a new independent uniform variable. The Glivenko–Cantelli theorem remains valid for the reinforced empirical measure, but the Donsker theorem does not. Specifically, we show that the sequence of empirical processes converges in law to a Brownian bridge only up to a constant factor when $p<1/2$, and that a further rescaling is needed when $p>1/2$, in which case the limit is a bridge with exchangeable increments and discontinuous paths. This is related to earlier limit theorems for correlated Bernoulli processes, the so-called elephant random walk, and more generally step-reinforced random walks.
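The reinforcement step described above can be sketched in a few lines of Python. This is an illustrative simulation only (the function name and the Kolmogorov–Smirnov check are my additions, not from the paper); it also numerically illustrates that the reinforced empirical distribution stays close to the uniform one, as the Glivenko–Cantelli statement asserts.

```python
import random

def simon_sequence(n, p, rng=random.Random(0)):
    """Simulate Simon's reinforcement algorithm: with probability p,
    repeat a uniformly chosen past value; otherwise draw a fresh
    independent Uniform(0,1) variable."""
    seq = [rng.random()]  # U_1 is a fresh uniform
    for _ in range(n - 1):
        if rng.random() < p:
            seq.append(rng.choice(seq))  # reinforce: copy a past value
        else:
            seq.append(rng.random())     # innovate: new uniform
    return seq

# Empirical check of the Glivenko-Cantelli behaviour: the KS distance
# between the empirical CDF and the Uniform(0,1) CDF should be small.
vals = sorted(simon_sequence(100_000, p=0.3))
n = len(vals)
ks = max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(vals))
print(f"KS distance to uniform: {ks:.4f}")
```

The Donsker-scale fluctuations discussed in the abstract (the factor change for $p<1/2$ and the rescaling for $p>1/2$) are not visible in this crude check; it only confirms convergence of the empirical measure itself.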