Numerous problems in signal processing and imaging, statistical learning and data mining, and computer vision can be formulated as optimization problems that consist of minimizing a sum of convex functions, not necessarily differentiable and possibly composed with linear operators, which can in turn be recast as split feasibility problems (SFPs); see for example [5]. Each function is typically either a data-fidelity term or a regularization term enforcing some property of the solution; see for example [9] and the references therein.

In this paper we are interested in split feasibility problems, which can be seen as a general form of the Q-Lasso introduced in [1], itself an extension of the well-known Lasso of Tibshirani [24]. Here Q is a closed convex subset of a Euclidean m-space, for some integer m ≥ 1, which can be interpreted as the set of errors within a given tolerance level when linear measurements are taken to recover a signal or image via the Lasso. Inspired by recent work of Lou et al. [16,26], we study a nonconvex regularization of the SFP and propose three splitting algorithms for this general case. The first is based on the DC (difference of convex functions) algorithm (DCA) introduced by Pham Dinh Tao; the second is nothing other than the celebrated forward-backward algorithm; and the third uses a method introduced by Mine and Fukushima. It is worth mentioning that SFPs model a number of applied problems arising in signal and image processing, and especially optimization problems for intensity-modulated radiation therapy (IMRT) treatment planning; see for example [4].
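For concreteness, the two models just mentioned may be recalled in the following standard form (a sketch consistent with [1]; the symbols A, b, γ are not fixed above and are used here only for illustration):
\[
\min_{x\in\mathbb{R}^n}\ \tfrac12\,\|Ax-b\|_2^2+\gamma\|x\|_1
\qquad\text{(Lasso)},
\qquad
\min_{x\in\mathbb{R}^n}\ \tfrac12\,d_Q^2(Ax)+\gamma\|x\|_1
\qquad\text{(Q-Lasso)},
\]
where $d_Q$ denotes the Euclidean distance to the set $Q$. Taking $Q=\{b\}$ recovers the Lasso, since $d_{\{b\}}(Ax)=\|Ax-b\|_2$.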
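To illustrate the second of the three schemes, a minimal forward-backward (proximal gradient) iteration for the classical Lasso, the special case Q = {b}, might look like the following sketch; the problem data, step size, and iteration count are illustrative choices, not part of the algorithms developed in this paper.

```python
def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, applied componentwise.
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def forward_backward_lasso(A, b, gamma, step, iters=500):
    # Minimize (1/2)||Ax - b||^2 + gamma * ||x||_1 by alternating a
    # forward (gradient) step on the smooth term with a backward
    # (proximal) step on the ell_1 term.
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]  # residual Ax - b
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]          # gradient A^T(Ax - b)
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * gamma)
    return x
```

For A equal to the identity, the minimizer is the componentwise soft-thresholding of b at level gamma, which the iteration reproduces; convergence of forward-backward splitting requires the step size to satisfy the usual bound tied to the Lipschitz constant of the gradient.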