Whether in medical imaging, astronomy or remote sensing, the data are increasingly complex. In addition to the spatial dimension, the data may contain temporal or spectral information that characterises the different sources present in the image. The trade-off between spatial and temporal/spectral resolution is often resolved at the expense of spatial resolution, so that several sources may be strongly mixed within the same pixel/voxel. Source separation methods must therefore incorporate spatial information to estimate the contribution and signature of each source in the image. We consider the particular case where the position of the sources is approximately known thanks to external information, which may come from another imaging modality or from a priori knowledge. We propose a spatially constrained dictionary learning source separation algorithm that uses, e.g., a high-resolution segmentation map or regions of interest defined by an expert to regularise the estimation of the source contributions. The originality of the proposed model is the replacement of the sparsity constraint, classically expressed as an ℓ1 penalty on the localisation of the sources, by an indicator function exploiting the external source localisation information. The model is easily adaptable to different applications by adding or modifying the constraints on the source properties in the optimisation problem. The performance of the algorithm has been validated on synthetic and quasi-real data before being applied to real data previously analysed by other methods from the literature, in order to compare the results. To illustrate the potential of the approach, different applications have been considered, ranging from scintigraphic data to astronomical and fMRI data.
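The abstract does not specify the optimisation scheme, so the following is only a minimal sketch of the central idea: in an alternating factorisation X ≈ D·A (signatures D, spatial contributions A), the usual ℓ1 proximal step is replaced by a projection onto the indicator set of an externally supplied support mask, which forces each source's contribution to zero outside its known location. All variable names, the toy data, and the multiplicative-update solver (standard nonnegative matrix factorisation updates, not the paper's algorithm) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions): 20 spectral channels, 100 pixels, 3 sources.
n_channels, n_pixels, n_sources = 20, 100, 3

# External source localisation (e.g. from a segmentation map):
# a binary support mask per source, with deliberately overlapping regions.
support = np.zeros((n_sources, n_pixels), dtype=bool)
support[0, :40] = True
support[1, 30:70] = True
support[2, 60:] = True

# Synthetic ground truth respecting the supports, and the mixed data X.
D_true = np.abs(rng.normal(size=(n_channels, n_sources)))
A_true = np.abs(rng.normal(size=(n_sources, n_pixels))) * support
X = D_true @ A_true

# Alternating updates. The key line is the projection `A * support`:
# instead of soft-thresholding (the prox of an l1 penalty), we apply the
# indicator of the known support, zeroing contributions outside it.
D = np.abs(rng.normal(size=(n_channels, n_sources)))
A = np.abs(rng.normal(size=(n_sources, n_pixels))) * support
eps = 1e-9
for _ in range(300):
    A = A * (D.T @ X) / (D.T @ (D @ A) + eps)   # nonnegative update of A
    A = A * support                              # indicator-function projection
    D = D * (X @ A.T) / ((D @ A) @ A.T + eps)   # nonnegative update of D

rel_err = np.linalg.norm(D @ A - X) / np.linalg.norm(X)
```

By construction, A is exactly zero outside the regions allowed by the mask, so the localisation constraint is satisfied at every iteration rather than only encouraged, which is the practical difference from an ℓ1 penalty.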