The objective of this work is to reduce the cost of model-based sensitivity analysis for ultrasonic nondestructive testing systems by replacing the accurate physics-based model with machine learning (ML) algorithms, so that Sobol' indices can be computed quickly. The ML algorithms considered in this work are neural networks (NN), convolutional neural networks (CNN), and deep Gaussian processes (DGP). The performance of these algorithms is measured by the root mean squared error on a fixed set of testing points and by the number of high-fidelity samples required to reach a target accuracy. The algorithms are compared on three ultrasonic testing benchmark cases, each with three uncertain parameters: a spherically-void defect under a focused transducer, a spherically-void defect under a planar transducer, and a spherical-inclusion defect under a focused transducer. The results show that the NN required 35, 100, and 35 high-fidelity samples for the three cases, respectively; the CNN required 35, 100, and 56; and the DGP required 84, 84, and 56.
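To make the surrogate-based workflow concrete, the sketch below illustrates the general idea under stated assumptions (it is not the authors' code): a neural-network surrogate is trained on a small budget of high-fidelity samples, its accuracy is checked by RMSE on held-out test points, and first-order and total-order Sobol' indices are then estimated from many cheap surrogate evaluations via the Saltelli/Jansen pick-freeze estimators. The function `expensive_model`, the uniform input ranges, the training budget of 35 samples, and the network hyperparameters are all illustrative stand-ins, not details from the paper.

```python
# Minimal sketch of surrogate-assisted Sobol' analysis (illustrative assumptions only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
d = 3  # three uncertain input parameters, as in the benchmark cases


def expensive_model(x):
    # Hypothetical stand-in for the physics-based ultrasonic measurement model.
    return np.sin(x[:, 0]) + 0.7 * x[:, 1] ** 2 + 0.1 * x[:, 2]


# 1) Train the surrogate on a limited budget of "high-fidelity" samples.
n_train = 35
X_train = rng.uniform(0.0, 1.0, size=(n_train, d))
y_train = expensive_model(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

# 2) Check surrogate accuracy by RMSE on held-out test points.
X_test = rng.uniform(0.0, 1.0, size=(200, d))
rmse = np.sqrt(np.mean((surrogate.predict(X_test) - expensive_model(X_test)) ** 2))
print(f"surrogate RMSE: {rmse:.3e}")

# 3) Estimate Sobol' indices using only cheap surrogate evaluations.
N = 50_000
A = rng.uniform(0.0, 1.0, size=(N, d))
B = rng.uniform(0.0, 1.0, size=(N, d))
yA = surrogate.predict(A)
yB = surrogate.predict(B)
var_y = np.var(np.concatenate([yA, yB]))
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # pick-freeze: swap column i of A with B
    yABi = surrogate.predict(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var_y        # first-order index (Saltelli)
    ST = 0.5 * np.mean((yA - yABi) ** 2) / var_y  # total-order index (Jansen)
    print(f"x{i + 1}: S1 = {S1:.3f}, ST = {ST:.3f}")
```

In practice the NN surrogate above could be swapped for a CNN or a deep Gaussian process, as compared in this work, without changing the Sobol' estimation step.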