Local measurements of the Hubble constant ($$H_0$$) based on Cepheids and Type Ia supernovae differ by $$\approx 5\sigma$$ from the value of $$H_0$$ estimated from Planck CMB observations under $$\Lambda$$CDM assumptions. In order to better understand this $$H_0$$
tension, the comparison of different methods of analysis will be fundamental to interpreting the data sets provided by the next generation of surveys. In this paper, we deploy machine learning algorithms to measure $$H_0$$ through a regression analysis on synthetic data of the expansion rate, assuming different redshift ranges and different levels of uncertainty. We compare the performance of several regression algorithms, such as Extra-Trees, Artificial Neural Networks, Gradient Boosting, and Support Vector Machines, and we find that the Support Vector Machine exhibits the best performance in terms of the bias-variance tradeoff in most cases, proving to be a competitive cross-check to non-parametric regression methods such as Gaussian Processes.
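For concreteness, the sketch below illustrates the kind of regression analysis described here; it is not the authors' pipeline, and the fiducial values, noise level, and hyperparameter grid are illustrative assumptions. It trains a Support Vector Machine regressor on noisy mock $$H(z)$$ data generated from a flat $$\Lambda$$CDM model, $$H(z) = H_0\sqrt{\Omega_m(1+z)^3 + 1 - \Omega_m}$$, and estimates $$H_0$$ by extrapolating the learned expansion rate to $$z=0$$.

```python
# Minimal sketch (assumed workflow, not the paper's exact pipeline):
# fit an SVM regressor to noisy mock H(z) data and read off H0 = H(z=0).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(42)

# Fiducial flat LambdaCDM used to generate the synthetic data (assumed values).
H0_true, Om = 70.0, 0.3  # km/s/Mpc, matter density parameter

def hubble(z):
    """Expansion rate H(z) for flat LambdaCDM."""
    return H0_true * np.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

# Synthetic sample: redshifts in a range typical of expansion-rate data,
# perturbed by a chosen fractional uncertainty (5% here, an assumption).
z = np.sort(rng.uniform(0.07, 2.0, 50))
sigma_frac = 0.05
Hz = hubble(z) * (1.0 + sigma_frac * rng.standard_normal(z.size))

# SVR with an RBF kernel; hyperparameters tuned by cross-validation.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1e2, 1e3, 1e4], "gamma": [0.1, 1.0], "epsilon": [0.5, 1.0, 2.0]},
    cv=5,
)
grid.fit(z.reshape(-1, 1), Hz)

# Extrapolate the learned H(z) to z = 0 to estimate H0.
H0_pred = grid.predict(np.array([[0.0]]))[0]
print(f"Estimated H0 = {H0_pred:.2f} km/s/Mpc (true: {H0_true})")
```

Reading $$H_0$$ off as the regressor's prediction at $$z=0$$ mirrors how Gaussian-Process reconstructions of $$H(z)$$ are commonly used to infer the Hubble constant, which makes the two approaches directly comparable.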