We address the problem of recovering a continuous-time (or continuous-space) signal from several blurred and noisy sampled versions of it, a scenario commonly encountered in super-resolution (SR) and array processing. We show that discretization, a key step in many SR algorithms, inevitably leads to inaccurate modeling. Instead, we treat the problem entirely in the continuous domain by modeling the signal as a continuous-time random process and deriving its linear minimum mean-squared error (LMMSE) estimate given the samples. We also provide an efficient implementation scheme applicable to 1D signals. Simulation results on real-world data demonstrate the advantage of our approach over SR techniques that rely on discretization.
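For orientation, the estimator referred to above follows the generic LMMSE form; the sketch below uses illustrative notation (a zero-mean signal $f(t)$ and a vector $\mathbf{y}$ collecting all available samples) rather than the specific covariance structure developed in the paper:
\[
\hat{f}(t) = \mathbf{c}_{f\mathbf{y}}(t)^{\mathsf T}\,\mathbf{C}_{\mathbf{y}\mathbf{y}}^{-1}\,\mathbf{y},
\qquad
\mathbf{c}_{f\mathbf{y}}(t) = \mathbb{E}\!\left[f(t)\,\mathbf{y}\right],
\quad
\mathbf{C}_{\mathbf{y}\mathbf{y}} = \mathbb{E}\!\left[\mathbf{y}\,\mathbf{y}^{\mathsf T}\right],
\]
where the cross-covariance $\mathbf{c}_{f\mathbf{y}}(t)$ and the sample covariance $\mathbf{C}_{\mathbf{y}\mathbf{y}}$ encode the continuous-domain prior on the signal together with the blur, sampling, and noise model of each observation.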