The purpose of this study is to investigate spectral effectiveness, i.e., the effect of individual color channels in fundus photography, on deep learning classification of retinopathy of prematurity (ROP). A convolutional neural network (CNN) end-to-end classifier was used to classify normal, stage 1, stage 2, and stage 3 ROP fundus images. Classification performance was quantitatively compared for individual-color-channel inputs, i.e., red, green, and blue, and for multi-color-channel fusion architectures, including early fusion, intermediate fusion, and late fusion. For individual-color-channel inputs, the green channel (88.00% accuracy, 76.00% sensitivity, and 92.00% specificity) and the red channel (87.25% accuracy, 74.50% sensitivity, and 91.50% specificity) performed similarly, and both substantially outperformed the blue channel (78.25% accuracy, 56.50% sensitivity, and 85.50% specificity). Among the multi-color-channel fusion architectures, the early-fusion and intermediate-fusion architectures performed almost the same as the green- or red-channel input alone, and both outperformed the late-fusion architecture. To interpret the deep learning performance, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to identify spatial characteristics relevant to ROP classification. Comparative analysis indicates that the green or red channel alone can provide sufficient information for ROP stage classification, without the blue channel.
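The following is a minimal sketch, in PyTorch, of how a single-color-channel input can be contrasted with an early-fusion (all-channel) input to the same end-to-end CNN classifier. The layer sizes, model name, and training details are illustrative assumptions, not the architecture used in this study; only the four-class output (normal, stage 1, stage 2, stage 3) and the channel-selection idea come from the text above.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 4  # normal, stage 1, stage 2, stage 3 ROP (from the study)


class SimpleROPClassifier(nn.Module):
    """Toy end-to-end CNN: in_channels=1 for a single color channel,
    in_channels=3 for early fusion of red, green, and blue."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.classifier = nn.Linear(32, NUM_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)


# A batch of RGB fundus images (N, 3, H, W), values in [0, 1]; random here.
fundus = torch.rand(8, 3, 224, 224)

# Individual-color-channel input: slice one channel, keep the channel dim.
green_only = fundus[:, 1:2, :, :]  # green channel only
single_channel_model = SimpleROPClassifier(in_channels=1)
logits_green = single_channel_model(green_only)

# Early fusion: all three color channels stacked at the input layer.
early_fusion_model = SimpleROPClassifier(in_channels=3)
logits_fused = early_fusion_model(fundus)

print(logits_green.shape, logits_fused.shape)  # torch.Size([8, 4]) for both
```

Intermediate and late fusion would instead run one branch per color channel and merge either the feature maps (intermediate) or the per-branch predictions (late); Grad-CAM can then be applied to any of these models to visualize the spatial regions driving each stage prediction.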