For many causal effect parameters $\psi$ of interest, doubly robust machine learning (DR-ML) estimators $\widehat{\psi}_1$ (Chernozhukov et al., 2018a) are the state of the art, combining the low prediction error of machine learning (ML) algorithms, the decreased bias of doubly robust estimators, and the analytic tractability and bias reduction of sample splitting with cross-fitting. Nonetheless, even in the absence of confounding by unmeasured factors, when the vector of potential confounders is high dimensional, the associated nominal $(1-\alpha)$ Wald confidence intervals $\widehat{\psi}_1 \pm z_{\alpha/2}\,\widehat{\mathrm{se}}[\widehat{\psi}_1]$ may still undercover even in large samples, because the bias of the estimator may be of the same or even larger order than its standard error of order $n^{-1/2}$.

In this paper, we introduce novel tests that (i) can have power to detect whether the bias of $\widehat{\psi}_1$ is of the same or larger order than its standard error of order $n^{-1/2}$, (ii) can provide a lower confidence limit on the degree of undercoverage of the interval $\widehat{\psi}_1 \pm z_{\alpha/2}\,\widehat{\mathrm{se}}[\widehat{\psi}_1]$, and (iii), strikingly, are valid under essentially no assumptions whatsoever. We also introduce an estimator $\widehat{\psi}_2 = \widehat{\psi}_1 - \widehat{\mathbb{IF}}_{22}$ whose bias is generally less, and often much less, than that of $\widehat{\psi}_1$, yet whose standard error is not much greater than that of $\widehat{\psi}_1$. The tests, as well as the estimator $\widehat{\psi}_2$, are based on a U-statistic $\widehat{\mathbb{IF}}_{22}$ that is the second-order influence function estimator for the parameter encoding the estimable part of the bias of $\widehat{\psi}_1$ (an illustrative form is sketched at the end of this section); for the definition and theory of higher-order influence functions, see Robins et al. (2008). When the covariance matrix of the potential confounders is known, $\widehat{\mathbb{IF}}_{22}$ is an unbiased estimator of this parameter. When the covariance matrix is unknown, we propose several novel estimators of $\widehat{\mathbb{IF}}_{22}$ that, in simulation experiments, perform almost as well as in the known-covariance case.

These striking claims must, however, be tempered in several important ways. First, no test, including ours, of the null hypothesis that the ratio of the bias of $\widehat{\psi}_1$ to its standard error is less than a given threshold can be consistent [without making additional assumptions (e.g., smoothness or sparsity) that may be incorrect]. Furthermore, the above claims apply only to parameters in a particular class; for parameters outside this class, our results are unavoidably less sharp and require more careful interpretation.
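To make the form of $\widehat{\mathbb{IF}}_{22}$ concrete, the following display is a minimal illustrative sketch for one canonical functional studied in the higher-order influence function literature, the expected conditional covariance $\psi = E[\mathrm{Cov}(A, Y \mid X)]$; the choice of functional, the basis $\bar{z}$, and the nuisance estimators $\hat{a}, \hat{b}$ are illustrative assumptions rather than definitions taken from this section. Suppose $\hat{a}(X)$ and $\hat{b}(X)$ are ML estimates of $E[A \mid X]$ and $E[Y \mid X]$ computed on a separate training sample, $\bar{z}(X)$ is a $k$-vector of (mean-centered) basis functions of the potential confounders, and $\Sigma = E[\bar{z}(X)\bar{z}(X)^{\mathsf{T}}]$ is the associated known covariance matrix. Then the second-order U-statistic takes the form
\[
\widehat{\mathbb{IF}}_{22} \;=\; \frac{1}{n(n-1)} \sum_{1 \le i \ne j \le n} \{A_i - \hat{a}(X_i)\}\, \bar{z}(X_i)^{\mathsf{T}} \Sigma^{-1} \bar{z}(X_j)\, \{Y_j - \hat{b}(X_j)\},
\]
an average over ordered pairs of the products of the two estimated residuals, coupled through the projection kernel $\bar{z}(X_i)^{\mathsf{T}} \Sigma^{-1} \bar{z}(X_j)$. Because each summand involves two distinct observations, $\widehat{\mathbb{IF}}_{22}$ is, conditional on the training sample, unbiased for the projection of the bias of $\widehat{\psi}_1$ onto the span of $\bar{z}$ when $\Sigma$ is known, which is the property exploited by both the tests and the corrected estimator $\widehat{\psi}_2 = \widehat{\psi}_1 - \widehat{\mathbb{IF}}_{22}$.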