Convolutional neural networks (CNNs) provide the sensing and detection community with a discriminative approach to image classification. However, one of the largest limitations of deep CNN image classifiers is their need for extensive training datasets containing a variety of image representations. While current augmentation methods such as GAN-based data generation, noise addition, rotation, and translation can help CNNs associate new images and their feature representations with those of a learned image class, many fail to provide new contexts of ground-truth feature information. To expand the association of critical class features within CNN training datasets, an image pairing and training dataset augmentation paradigm based on a multi-sensor-domain image data fusion algorithm is proposed. The algorithm uses a mutual information and merit-based feature selection subroutine to pair highly correlated cross-domain images drawn from multiple sensor-domain image datasets. It then augments each corresponding image pair into the opposite sensor domain's feature set via a highest-mutual-information, cross-sensor-domain image concatenation function. The augmented image set is then used to retrain the CNN to recognize broader generalizations of image class features through cross-domain, mixed representations. Experimental results indicate an increased ability of the retrained CNNs to generalize and discriminate between image classes when tested on SAR vehicle, solar cell device reliability screening, and lung cancer detection image datasets.
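The pairing-and-concatenation step described above can be sketched minimally as follows. This is an illustrative assumption of how such a subroutine might look, not the paper's actual implementation: it estimates mutual information from a joint intensity histogram (the paper's merit-based feature selection is omitted), pairs each image with its highest-MI counterpart from the other sensor domain, and stacks the pair along a new channel axis. The function names `mutual_information` and `pair_and_concat` are hypothetical.

```python
# Illustrative sketch (assumed, not the authors' code): mutual-information-based
# cross-domain image pairing followed by channel-wise concatenation.
import numpy as np


def mutual_information(img_a, img_b, bins=16):
    """Estimate MI (in nats) between two equal-sized grayscale images
    from the joint histogram of their pixel intensities."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y)
    nz = pxy > 0                               # skip zero cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))


def pair_and_concat(domain_a, domain_b):
    """For each image in domain_a, find the domain_b image with the
    highest mutual information and stack the pair as a 2-channel image."""
    fused = []
    for a in domain_a:
        scores = [mutual_information(a, b) for b in domain_b]
        best = domain_b[int(np.argmax(scores))]
        fused.append(np.stack([a, best], axis=-1))  # (H, W) -> (H, W, 2)
    return fused
```

In practice the fused 2-channel images would replace or extend the single-domain training set, so the CNN's first convolutional layer sees both sensor domains' representations of the same class features.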