Abstract—ℓ1 mean filtering is a conventional, optimization-based method for estimating the positions of jumps in a piecewise constant signal perturbed by additive noise. In this method, an ℓ1-norm penalty on the first-order derivative of the signal promotes sparsity of that derivative. Theoretical results, however, show that in some situations, which can occur frequently in practice, the conventional method identifies false change points even when the jump amplitudes tend to ∞. This issue is referred to as the stair-casing problem and restricts the practical usefulness of ℓ1 mean filtering. In this paper, sparsity is promoted more tightly than by the ℓ1 norm through a certain class of nonconvex penalty functions, while the strict convexity of the resulting optimization problem is preserved. This leads to higher performance in detecting change points. To theoretically justify the performance improvements over ℓ1 mean filtering, deterministic and stochastic sufficient conditions for exact change point recovery are derived. In particular, the theoretical results show that in the stair-casing problem, our approach might be able to exclude the false change points, while ℓ1 mean filtering may fail. A number of numerical simulations demonstrate the superiority of our method over ℓ1 mean filtering and another state-of-the-art algorithm that promotes sparsity more tightly than the ℓ1 norm. Specifically, it is shown that our approach can consistently detect change points when the jump amplitudes become sufficiently large, while the other two competitors cannot.
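For reference, the conventional ℓ1 mean filtering problem referred to above is commonly posed as the following convex program (a minimal sketch under assumed notation, since the abstract defines no symbols: y ∈ ℝⁿ is the noisy observation, x the estimate of the piecewise constant signal, and λ > 0 a regularization weight):

\[
\hat{x} \;=\; \arg\min_{x \in \mathbb{R}^{n}} \; \frac{1}{2}\sum_{i=1}^{n} (y_i - x_i)^2 \;+\; \lambda \sum_{i=1}^{n-1} \left| x_{i+1} - x_i \right|,
\]

where the ℓ1 term penalizes the first-order (discrete) derivative of x, so that the indices with nonzero differences x_{i+1} − x_i serve as the estimated change points; the nonconvex penalties described in the abstract would take the place of this ℓ1 term.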