While it is well-known that neural networks enjoy excellent approximation capabilities, computing such approximations from point samples remains a significant challenge. Based on tools from information-based complexity, recent work by Grohs and Voigtlaender [Proof of the theory-to-practice gap in deep learning via sampling complexity bounds for neural network approximation spaces, preprint (2021), arXiv:2104.02746] developed a rigorous framework for assessing this so-called "theory-to-practice gap". More precisely, that work shows that there exist functions which can be approximated by neural networks with ReLU activation function at an arbitrary rate, while their numerical computation from point samples requires a number of samples growing exponentially in the input dimension. The present study extends these findings by establishing analogous results for the ReQU activation function.
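For reference, the two activation functions mentioned above are standardly defined as follows; ReQU denotes the rectified quadratic unit, the pointwise square of the ReLU:

```latex
\[
  \operatorname{ReLU}(x) \;=\; \max\{0, x\},
  \qquad
  \operatorname{ReQU}(x) \;=\; \bigl(\max\{0, x\}\bigr)^{2},
  \qquad x \in \mathbb{R}.
\]
```

Note that, unlike the ReLU, the ReQU is continuously differentiable, which is one reason results for the two activation functions require separate treatment.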