Abstract. We present a theoretical study of the recovery of an unknown vector x ∈ R^p (such as a signal or an image) from noisy data y ∈ R^q by minimizing with respect to x a regularized cost-function F(x, y) = Ψ(x, y) + αΦ(x), where Ψ is a data-fidelity term, Φ is a smooth regularization term, and α > 0 is a parameter. Typically, Ψ(x, y) = ‖Ax − y‖², where A is a linear operator. The data-fidelity terms Ψ involved in regularized cost-functions are generally smooth functions; only a few papers make an exception, and they consider restricted situations. Nonsmooth data-fidelity terms are avoided in image processing. In spite of this, we consider both smooth and nonsmooth data-fidelity terms. Our goal is to capture essential features exhibited by the local minimizers of regularized cost-functions in relation to the smoothness of the data-fidelity term. In order to fix the context of our study, we consider Ψ(x, y) = Σ_i ψ(a_i^T x − y_i), where the a_i^T are the rows of A and ψ is C^m on R \ {0}. We show that if ψ′(0⁻) < ψ′(0⁺), then typical data y give rise to local minimizers x̂ of F(·, y) which fit exactly a certain number of the data entries: there is a possibly large set ĥ of indexes such that a_i^T x̂ = y_i for every i ∈ ĥ. In contrast, if ψ is smooth on R, then for almost every y, the local minimizers of F(·, y) do not fit any entry of y. Thus, the possibility that a local minimizer fits some data entries is due to the nonsmoothness of the data-fidelity term. This is a strong mathematical property which is useful in practice. By way of application, we construct a cost-function allowing aberrant data (outliers) to be detected and selectively smoothed. Our numerical experiments advocate the use of nonsmooth data-fidelity terms in regularized cost-functions for special purposes in image and signal processing.
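As a toy numerical sketch (not from the paper) of the exact-fit phenomenon, consider the scalar case p = 1 with every row a_i^T = 1, the smooth regularizer Φ(x) = x², and the nonsmooth fidelity ψ(t) = |t|, so that ψ′(0⁻) = −1 < 1 = ψ′(0⁺). The data vector and α below are arbitrary choices for illustration:

```python
import numpy as np

# Hypothetical 1-D illustration (p = 1): the unknown is a scalar x, every row
# a_i^T of A equals 1, Phi(x) = x^2, and psi(t) = |t|, so
#   F(x, y) = sum_i |x - y_i| + alpha * x^2.
y = np.array([-2.0, 0.5, 1.0, 3.0])  # made-up data
alpha = 0.1

def F(x):
    return np.abs(x - y).sum() + alpha * x ** 2

# Because psi(t) = |t| has a kink at 0, a data entry y_i is a local minimizer
# of F(., y) iff 0 lies in the subdifferential of F at y_i, i.e. in
#   sum_{j != i} sign(y_i - y_j) + 2*alpha*y_i + [-1, 1].
def fits_exactly(i):
    g = np.sign(y[i] - np.delete(y, i)).sum() + 2 * alpha * y[i]
    return -1.0 <= g <= 1.0

exact_fits = [i for i in range(len(y)) if fits_exactly(i)]
print("entries fitted exactly:", [float(y[i]) for i in exact_fits])

# With the smooth choice psi(t) = t^2, F(x) = sum_i (x - y_i)^2 + alpha*x^2
# is minimized at x* = sum(y) / (len(y) + alpha), which matches no y_i.
x_smooth = y.sum() / (len(y) + alpha)
print("smooth-fidelity minimizer:", x_smooth)
```

Here the minimizer of the nonsmooth cost sits exactly at the data entry y_2 = 0.5, because the subdifferential of |·| at 0 has width 2, which absorbs the surrounding gradient terms; the smooth quadratic fidelity yields a minimizer that coincides with no data entry, in line with the dichotomy stated above.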