Nested integration arises when a nonlinear function is applied to an integrand and the result is integrated again. It is common in engineering problems, such as optimal experimental design, where typically neither integral has a closed-form expression. Approximating both integrals with the Monte Carlo method leads to a double-loop Monte Carlo estimator, which is often prohibitively expensive: the estimate of the outer integral carries a bias proportional to the variance of the inner Monte Carlo estimator. When the inner integrand is itself only known approximately, additional bias enters the outer estimate. Variance reduction methods, such as importance sampling, have been used successfully to make computations more affordable. Furthermore, random samples can be replaced with deterministic low-discrepancy sequences, leading to quasi-Monte Carlo techniques. Randomizing the low-discrepancy sequences simplifies the error analysis of the proposed double-loop quasi-Monte Carlo estimator. To our knowledge, no comprehensive error analysis exists yet for truly nested randomized quasi-Monte Carlo estimation, i.e., for estimators that use low-discrepancy sequences for both the inner and the outer approximation. We derive asymptotic error bounds and a method for choosing the optimal number of samples for both integral approximations. We then demonstrate the computational savings of this approach over standard nested (i.e., double-loop) Monte Carlo integration by estimating the expected information gain in two examples from Bayesian optimal experimental design, the second of which involves an experiment from solid mechanics.
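To make the double-loop structure concrete, the following is a minimal sketch of a nested Monte Carlo estimator for a hypothetical toy problem of our own choosing (not one of the paper's examples): the outer quantity is E_x[g(E_y[f(x, y)])] with f(x, y) = x + y, g(t) = t², and x, y independently uniform on [0, 1], whose exact value is 13/12. Because g is nonlinear (here convex), applying it to the noisy inner estimate biases the outer average; for this g the bias is exactly Var(y)/N_inner = 1/(12·N_inner), illustrating why the inner sample size must grow to control the bias.

```python
import random


def double_loop_mc(n_outer, n_inner, seed=0):
    """Double-loop Monte Carlo estimate of E_x[ g( E_y[ f(x, y) ] ) ].

    Toy problem (illustrative only): f(x, y) = x + y, g(t) = t**2,
    x, y ~ Uniform(0, 1). Exact value: 13/12.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        x = rng.random()
        # Inner loop: Monte Carlo estimate of E_y[f(x, y)] = x + 0.5.
        inner = sum(x + rng.random() for _ in range(n_inner)) / n_inner
        # Nonlinear g applied to the noisy inner estimate -> bias
        # E[inner**2] = (x + 0.5)**2 + Var(inner), with
        # Var(inner) = 1 / (12 * n_inner) for this toy problem.
        total += inner**2
    return total / n_outer


if __name__ == "__main__":
    print(double_loop_mc(n_outer=20000, n_inner=100))  # close to 13/12
```

The quasi-Monte Carlo variants discussed in the abstract would replace `rng.random()` in one or both loops with points from a randomized low-discrepancy sequence (e.g., a scrambled Sobol' sequence); the cost comparison is between the total work `n_outer * n_inner` needed by each method to reach a given accuracy.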