This article is motivated by the need to bridge the gap between modern asset pricing theory and recent developments in econometric methodology. While asset pricing theory promotes the use of conditional pricing models, econometric inference on conditional models can be challenging because of misspecification or weak identification. To address misspecification, we use the conditional Hansen and Jagannathan (1997) (HJ) distance studied by Gagliardini and Ronchetti (2016), but we focus on the interpretation and estimation of the pseudo-true value, defined as the minimizer of this distance. While efficient Generalized Method of Moments (GMM) is not meaningful for estimating a pseudo-true value, the HJ distance not only delivers a meaningful loss function but also offers an additional advantage: the interpretation and estimation of managed portfolios whose exact pricing characterizes the pseudo-true pricing kernel (stochastic discount factor, SDF). For conditionally affine pricing kernels, we exhibit managed portfolios that are well defined independently of the pseudo-true value of the parameters, although their exact pricing is achieved by the pseudo-true SDF.

For the general case of nonlinear SDFs, we propose a smooth minimum distance (SMD) estimator (Lavergne and Patilea, 2013) that avoids focusing on specific directions, as the managed-portfolio approach does. Although based on kernel smoothing, the SMD approach avoids the instabilities, and the resulting need for trimming strategies, that classical local GMM estimators display when the density of the conditioning variables can take arbitrarily small values. In addition, SMD may allow fixed-bandwidth asymptotics, which helps with the curse of dimensionality. In contrast to the true unknown value of a well-specified model, the estimated pseudo-true value, albeit defined in a time-invariant (unconditional) way, may actually depend on the choice of the state variables that define the fundamental factors and their scaling weights. Therefore, we may not want to be overly parsimonious about the set of explanatory variables. Finally, following Antoine and Lavergne (2014), we show how SMD can be further robustified to deal with weaker identification contexts. Since SMD can be seen as a local extension of jackknife GMM (Newey and Windmeijer, 2009), we characterize the Gaussian

* We gratefully acknowledge very helpful comments by Lars Peter Hansen and Sydney Ludvigson on an earlier draft of this article. We are also grateful to participants of the 2017 EC² conference in Amsterdam for insightful questions and remarks.
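To fix ideas, and in generic notation that is not necessarily the paper's, the pseudo-true value can be sketched as the minimizer of the conditional HJ distance: for an SDF $m_{t+1}(\theta)$, a vector of gross returns $R_{t+1}$, and conditioning information $\mathcal{F}_t$, one version of this distance and of its minimizer reads
\[
d(\theta)^2 = \mathrm{E}\Big[\, \mathrm{E}\big[m_{t+1}(\theta)R_{t+1}-\mathbf{1} \,\big|\, \mathcal{F}_t\big]' \, \mathrm{E}\big[R_{t+1}R_{t+1}' \,\big|\, \mathcal{F}_t\big]^{-1} \, \mathrm{E}\big[m_{t+1}(\theta)R_{t+1}-\mathbf{1} \,\big|\, \mathcal{F}_t\big] \Big],
\qquad
\theta^{*} = \arg\min_{\theta}\, d(\theta)^2 .
\]
Under misspecification no parameter value sets the conditional pricing errors to zero, yet $\theta^{*}$ remains well defined, and it is this pseudo-true SDF that exactly prices the managed portfolios discussed above.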
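Similarly, a minimal sketch of the SMD objective, in our own simplified notation rather than the authors' exact formulation: writing the conditional pricing errors as $g(Z_t,\theta) = m_{t+1}(\theta)R_{t+1} - \mathbf{1}$ and the conditioning variables as $X_t$, the estimator minimizes a kernel-weighted double sum of the form
\[
Q_T(\theta) = \frac{1}{T(T-1)} \sum_{t \neq s} g(Z_t,\theta)' \, K\!\Big(\frac{X_t - X_s}{h}\Big) \, g(Z_s,\theta),
\]
where $K$ is a kernel and the bandwidth $h$ may even be held fixed. Dropping the $t=s$ terms is what makes SMD a local extension of jackknife GMM, and the absence of any density denominator is what removes the need for the trimming strategies required by classical local GMM.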