Active debris removal and unmanned on-orbit servicing missions have gained increasing interest in recent years, along with the possibility of performing them through the use of an autonomous chasing spacecraft. In this work, new resources are proposed to aid the implementation of guidance, navigation and control algorithms for satellites devoted to the inspection of non-cooperative targets before any proximity operation is initiated. In particular, the use of Convolutional Neural Networks (CNNs) performing object detection and instance segmentation is proposed, and its effectiveness in recognizing components and parts of the target satellite is evaluated. Yet no reliable dataset of training images of this kind exists to date. A tailored, publicly available software tool has been developed to overcome this limitation by generating synthetic images. Computer Aided Design models of existing satellites are loaded into a 3-D animation software and used to programmatically render images of the objects from different points of view and under different lighting conditions, together with the necessary ground-truth labels and masks for each image. The results show that a relatively low number of iterations is sufficient for a CNN trained on such datasets to reach a mean average precision in line with state-of-the-art performance achieved by CNNs on common datasets. An assessment of the performance of the neural network when trained under different conditions is provided. To conclude, the method is tested on real images from the MEV-1 on-orbit servicing mission, showing that using only artificially generated images to train the model does not compromise the learning process.
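
As a purely illustrative companion to the rendering approach described above, the sketch below shows one way the camera viewpoints and lighting conditions could be sampled programmatically before each synthetic render. All function names, parameter ranges, and distributions here are assumptions for illustration; the paper's actual rendering pipeline and parameter choices are not specified in this abstract.

```python
import math
import random

def sample_render_configs(n_views, seed=0):
    """Sample hypothetical camera poses and sun lighting setups for
    synthetic-image generation (illustrative sketch only; the actual
    pipeline and parameter ranges used in the paper may differ)."""
    rng = random.Random(seed)
    configs = []
    for _ in range(n_views):
        # Uniformly sample a direction on the unit sphere for the camera,
        # so the target is imaged from all around.
        theta = rng.uniform(0.0, 2.0 * math.pi)     # azimuth
        phi = math.acos(rng.uniform(-1.0, 1.0))     # polar angle (area-uniform)
        distance = rng.uniform(5.0, 50.0)           # chaser-target range [m] (assumed)
        camera_position = (
            distance * math.sin(phi) * math.cos(theta),
            distance * math.sin(phi) * math.sin(theta),
            distance * math.cos(phi),
        )
        # Vary the sun direction and strength between renders to emulate
        # the harsh, changing illumination encountered on orbit.
        configs.append({
            "camera_position": camera_position,
            "sun_azimuth": rng.uniform(0.0, 2.0 * math.pi),
            "sun_elevation": rng.uniform(-math.pi / 2.0, math.pi / 2.0),
            "sun_strength": rng.uniform(0.2, 1.5),  # arbitrary units (assumed)
        })
    return configs
```

Each sampled configuration would then be passed to the 3-D animation software's scripting interface to position the camera and light source before rendering the image and exporting its ground-truth labels and masks.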