Difficulties with inference in predictive regressions are generally attributed to strong persistence in the predictor series. We show that the major source of the problem is actually the nuisance intercept parameter, and we propose basing inference on the restricted likelihood, which is free of such nuisance location parameters and also possesses small curvature, making it suitable for inference. The bias of the restricted maximum likelihood (REML) estimates is shown to be approximately 50% less than that of the ordinary least squares (OLS) estimates near the unit root, without loss of efficiency. The error in the chi-square approximation to the distribution of the REML-based likelihood ratio test (RLRT) for no predictability is shown to be $\left(\tfrac{3}{4} - \rho^2\right)n^{-1}\{G_3(\cdot) - G_1(\cdot)\} + O(n^{-2})$, where $|\rho| < 1$ is the correlation between the innovation series and $G_s(\cdot)$ is the cumulative distribution function (c.d.f.) of a $\chi_s^2$ random variable. This very small error, which is free of the autoregressive (AR) parameter, suggests that the RLRT for predictability has very good size properties even when the regressor is strongly persistent. The Bartlett-corrected RLRT achieves an $O(n^{-2})$ error. Power under local alternatives is obtained, and extensions are provided to more general univariate regressors and to vector AR(1) regressors, for which OLS may no longer be asymptotically efficient. In simulations the RLRT maintains size well, is robust to nonnormal errors, and has uniformly higher power than the test of Jansson and Moreira (2006, Econometrica 74, 681–714), with gains that can be substantial. The Bonferroni Q test of Campbell and Yogo (2006, Journal of Financial Economics 81, 27–60) is found to suffer from size distortions and can be significantly oversized.
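For a quick numerical feel for the expansion, the sketch below evaluates the stated second-order approximation $P(\mathrm{RLRT} \le x) \approx G_1(x) + (\tfrac{3}{4}-\rho^2)n^{-1}\{G_3(x)-G_1(x)\}$ and the Bartlett-type rescaling it implies via the standard identity $G_1(x)-G_3(x)=2xg_1(x)$. This is an illustrative sketch, not the paper's code: the function names are ours, $\rho$ is treated as known (in practice it would be replaced by a consistent estimate), and the paper should be consulted for the exact form of its Bartlett correction.

```python
# Illustrative sketch of the second-order chi-square approximation for the
# RLRT and the Bartlett-type rescaling it suggests. Function names are
# hypothetical; rho is taken as known here for simplicity.
from scipy.stats import chi2


def rlrt_cdf_corrected(x, rho, n):
    """Approximate P(RLRT <= x) to O(n^-2):
    G_1(x) + (3/4 - rho^2)/n * (G_3(x) - G_1(x))."""
    c = (0.75 - rho**2) / n
    return chi2.cdf(x, df=1) + c * (chi2.cdf(x, df=3) - chi2.cdf(x, df=1))


def bartlett_corrected_stat(w, rho, n):
    """Bartlett-type rescaling implied by the expansion above.

    Using G_1(x) - G_3(x) = 2*g_3(x) = 2*x*g_1(x), the expansion is
    equivalent (to O(n^-2)) to W / (1 + 2*(3/4 - rho^2)/n) being chi^2_1.
    The paper's own Bartlett correction may differ in implementation detail.
    """
    return w / (1.0 + 2.0 * (0.75 - rho**2) / n)


# Example: actual size of a nominal 5% RLRT with n = 100 observations and
# rho = -0.9 (strong innovation correlation, typical in return predictability).
n, rho = 100, -0.9
crit = chi2.ppf(0.95, df=1)  # chi^2_1 critical value, about 3.84
actual_size = 1.0 - rlrt_cdf_corrected(crit, rho, n)
print(f"size of nominal 5% RLRT: {actual_size:.4f}")
```

Running this with $n = 100$ and $\rho = -0.9$ gives an actual size of roughly 0.05, since the $n^{-1}$ coefficient $\tfrac{3}{4}-\rho^2$ is small for $|\rho|$ near one, consistent with the good size properties claimed above.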