The recent paper of Meng, "3D potential field data inversion with L0 quasi-norm sparse constraints", discusses the application of an L0-norm constraint for reconstruction from potential field data. The L0-norm stabilizer makes it possible for the inversion algorithm to reconstruct a sparse solution. While the paper is very interesting, some aspects presented in the paper should be clarified. Most significantly, the L0-norm stabilizer was introduced as the compactness constraint by Last and Kubik (1983) for the inversion of gravity data. The method was further developed by Portniaguine and Zhdanov (1999) through the addition of prior model information, leading to the minimum support constraint. The combination of the L0-norm stabilizer with depth weighting has subsequently been used by a number of authors, as referenced, for example, in Pilkington (2009) and Vatankhah, Ardestani and Renaut (2015). The motivation for the L0-norm constraint presented in Meng (2018) is very close to that of the compactness or minimum support constraints. For the benefit of other readers, in the following brief note we expand on the relationship between these constraint conditions.

The L0-norm constraint was initially used in Last and Kubik (1983) for the reconstruction of a compact gravity model. Compactness is interpreted to mean a sparse solution, i.e. one in which the number of non-zero model parameters is minimized. Last and Kubik (1983) recognized that the L0 norm of the model parameters, $\|\mathbf{m}\|_{L_0}$, can be approximated by a weighted L2 norm of the model parameters, $\|W\mathbf{m}\|_{L_2}$. Matrix $W$ is a parameter-dependent diagonal matrix, $W = \mathrm{diag}(\mathbf{m}^2 + \sigma^2)^{-1}$, which is updated at each iteration using the model parameters obtained at the previous iteration. Specifically, adopting the notation of Meng (2018), Last and Kubik (1983) used

$$ f_{\sigma}^{1}(m) = \frac{m^2}{m^2 + \sigma^2}, \qquad (1) $$

in which the limit $\sigma \rightarrow 0$ leads to the L0 constraint. Modifying (1) by introducing the prior model $m_{\mathrm{apr}}$ yields

$$ f(m) = \frac{(m - m_{\mathrm{apr}})^2}{(m - m_{\mathrm{apr}})^2 + \sigma^2}, $$

and provides the minimum support constraint that was introduced by Portniaguine and Zhdanov (1999). This minimizes the total volume over which the difference from the prior model is non-zero. In contrast, Meng (2018) introduced a different function, $f_{\sigma}^{2}(m)$, which imposes sparsity on the model parameters in a new way.

Both methodologies replace the L0 norm of the model parameters with an approximation of the L0 norm, an L0 quasi-norm, that depends strongly on the parameter $\sigma$. Small values of $\sigma$ yield sparse solutions, while as $\sigma$ increases the solutions become smoother. Figure 1 illustrates both functions for different values of $\sigma$, demonstrating that, for large $\sigma$, both functions are quadratic and thus provide smooth solutions, yet $f_{\sigma}^{1}(m)$ is smoother than $f_{\sigma}^{2}(m)$. Specifically, for large $\sigma$, more weight is imposed on the large elements of the parameter vector $\mathbf{m}$ in $f_{\sigma}^{1}(m)$ than in $f_{\sigma}^{2}(m)$, yielding a larger penalty on these large elements than on the small elements, and thus providing solutions that are smoother. For small $\sigma$, both functions exhibit a similar...
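To make the behaviour of the stabilizers discussed above concrete, the short Python sketch below evaluates the compactness/minimum-support approximation of equation (1) on a toy model vector. It is a minimal illustration under the assumption of element-wise evaluation, not code from Meng (2018) or the other cited papers; the function names and the example model values are purely illustrative.

```python
# Minimal sketch (illustrative only, not code from Meng, 2018, or the other
# cited papers): element-wise evaluation of the compactness/minimum-support
# stabilizer of equation (1), summed over a toy model vector.
import numpy as np

def f1_sigma(m, sigma):
    """L0 quasi-norm of equation (1): sum_i m_i^2 / (m_i^2 + sigma^2)."""
    return np.sum(m**2 / (m**2 + sigma**2))

def minimum_support(m, m_apr, sigma):
    """Minimum support stabilizer (Portniaguine and Zhdanov, 1999):
    equation (1) applied to the difference from the prior model m_apr."""
    d = m - m_apr
    return np.sum(d**2 / (d**2 + sigma**2))

def reweighting_matrix(m, sigma):
    """Diagonal weighting W = diag(m^2 + sigma^2)^(-1), recomputed at each
    iteration from the previous model, as described in the text."""
    return np.diag(1.0 / (m**2 + sigma**2))

# Toy sparse model: three non-zero parameters out of six (hypothetical values).
m = np.array([0.0, 2.0, 0.0, -1.5, 0.0, 0.3])
l0 = np.count_nonzero(m)  # exact L0 "norm" = 3

for sigma in [1e-6, 1e-2, 1.0, 10.0]:
    print(f"sigma = {sigma:7.0e}   f1_sigma = {f1_sigma(m, sigma):7.4f}   L0 = {l0}")
# Small sigma: f1_sigma -> 3, the number of non-zero parameters (sparse limit).
# Large sigma: f1_sigma -> ||m||_2^2 / sigma^2, a quadratic (smooth) penalty.
```

For small $\sigma$ the summed quasi-norm approaches the count of non-zero parameters, recovering the L0 behaviour, whereas for large $\sigma$ it reduces to a scaled quadratic penalty, consistent with the smooth solutions described above for Figure 1.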