Sketch extraction is of great value to historians who copy and study historical painting styles. However, most existing sketch extraction methods succeed only when the sketches are well preserved; for paintings with severe conservation issues, they perform poorly. We therefore propose a sketch extraction method that combines spectral imaging and deep learning. First, spectral image data are collected and the bands sensitive to the sketches are selected using prior knowledge of the sketch material (e.g. near-infrared bands are chosen if the sketches were drawn in carbon ink). A publicly available dataset of natural scene images is used to pre-train a bi-directional cascade network (BDCN), and the network parameters are then fine-tuned on sketches drawn by experts from images of painted cultural objects; this compensates for the shortage of sketch data for painted cultural objects and improves the generalization ability of the model. Finally, a U-Net is used to further suppress unwanted information and make the sketch clearer. Experimental results show that the proposed method extracts clear sketches even from faded paintings and in the presence of unwanted information or instrument noise, and it outperforms six other state-of-the-art extraction methods in both visual and objective comparisons. The proposed deep learning method is also compared with an unsupervised clustering method based on the Self-Organising Map (SOM), a 'shallow learning' approach in which pixels with similar spectra are grouped into clusters without the need for expert labeling.
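To make the band-selection step concrete, the sketch below shows one minimal way it could be realised. The function name, the (H, W, B) cube layout, and the 780-1000 nm near-infrared window are illustrative assumptions, not the authors' implementation; the paper only states that bands sensitive to the sketch material (e.g. near-infrared for carbon ink) are chosen using prior knowledge.

```python
import numpy as np

def select_sensitive_bands(cube, wavelengths_nm, lo_nm=780.0, hi_nm=1000.0):
    """Keep only the bands of an (H, W, B) spectral cube whose centre
    wavelengths fall inside [lo_nm, hi_nm] (assumed NIR window for carbon ink)."""
    wavelengths_nm = np.asarray(wavelengths_nm, dtype=float)
    mask = (wavelengths_nm >= lo_nm) & (wavelengths_nm <= hi_nm)
    return cube[:, :, mask], wavelengths_nm[mask]

# Hypothetical usage: a 64x64 cube with 30 bands spanning 400-1000 nm.
cube = np.random.rand(64, 64, 30)
wavelengths = np.linspace(400, 1000, 30)
nir_cube, nir_wavelengths = select_sensitive_bands(cube, wavelengths)
```

The selected bands would then serve as input to the pre-trained and fine-tuned BDCN, whose output is refined by the U-Net as described above.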