Image-level classification of remote sensing landslide images has attracted growing attention, yet the accuracy of traditional deep learning-based methods still has room for improvement. Evidence theory has proven effective in boosting the accuracy of neural networks; however, this study identifies three challenges that hinder its introduction into deep landslide image classification. To address these three problems, this study makes three improvements. To remedy the interpretability and decision-invariance losses of three previous divergences, we propose a Belief Jensen-Rényi divergence whose properties are proven. To couple evidence theory with deep remote sensing landslide image classification, a channel-wise multi-scale visual saliency fusion is developed. We additionally find that the channel-wise fusion reduces false recognitions of networks compared with the original RGB images. To avoid decision failures in the evidence-theoretic fusion process, we design an interpretability-improved three-branched fusion. Experiments on the Bijie Landslide dataset corroborate the synergistic benefits of the three improvements, where the proposed method is compared with state-of-the-art image classification backbone networks, remote sensing image scene classifiers, evidence fusion algorithms, and versatile evidence-theoretic deep learning classifiers. We also evaluate the new method under two sorts of image degradation, as well as in an actual scenario in Luding County, China, whose data are publicly available. The source code is at https://github.com/defzhangaa/deeplandslideDS.