Thanks to wearable devices combined with AI algorithms, it is possible to record and analyse physiological parameters such as heart rate variability (HRV) in ambulatory environments. The main downside of such setups is the poor quality of the recorded data caused by motion artefacts, noise, and data loss. These errors may considerably alter HRV analysis and should therefore be addressed beforehand, especially when the data are used for medical diagnosis. One widely used way to handle such problems is interpolation, but this approach does not preserve the time dependence of the signal. In this study, we propose a new HRV processing method that combines filtering with iterative data imputation based on a Gaussian distribution. A distinctive feature of the method is that it takes several physiological aspects into account, such as the HRV distribution, RR variability, and normal physiological boundaries, as well as time-series characteristics. We study the effect of this method on classification using a random forest (RF) classifier and compare it with other data imputation methods, including linear, shape-preserving piecewise cubic Hermite (pchip), and spline interpolation, in a case study on stress. Features from the HRV signals of 67 healthy subjects, reconstructed with each of the four methods, were analysed and separately classified by a random forest algorithm to distinguish stress from relaxation. The proposed method reached a stable F1 score of 61% even with a high percentage of missing data, whereas the other interpolation methods reached approximately 54% F1 score for a low percentage of missing data, with performance dropping to about 44% as the percentage increased. This suggests that our method yields better results for stress classification, especially for signals with a high percentage of missing data.
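As a point of reference for the baseline methods compared above, the sketch below fills gaps (NaNs) in an RR-interval series using linear, pchip, and spline interpolation over time. This is a minimal illustration built on NumPy/SciPy, not the authors' implementation; the helper name `impute_rr` and the toy RR values are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator


def impute_rr(t, rr, method="linear"):
    """Fill missing RR intervals (NaN) by interpolating over time.

    Hypothetical helper illustrating the three baseline imputation
    methods mentioned in the abstract (linear, pchip, spline).
    """
    t = np.asarray(t, dtype=float)
    rr = np.asarray(rr, dtype=float)
    ok = ~np.isnan(rr)  # indices of valid samples
    if method == "linear":
        return np.interp(t, t[ok], rr[ok])
    if method == "pchip":
        # shape-preserving piecewise cubic Hermite interpolation
        return PchipInterpolator(t[ok], rr[ok])(t)
    if method == "spline":
        return CubicSpline(t[ok], rr[ok])(t)
    raise ValueError(f"unknown method: {method}")


# Toy RR series (ms) with two gaps of missing beats
t = np.arange(8.0)
rr = np.array([800, 810, np.nan, np.nan, 820, 815, np.nan, 825], dtype=float)
filled = impute_rr(t, rr, "linear")
```

Note that all three baselines treat the gap purely geometrically; none of them models the physiological distribution or temporal dynamics of HRV, which is the shortcoming the proposed Gaussian imputation is meant to address.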