UAV video and other remote-sensing innovations have increased the demand for multispectral image stitching methods, which gather data over a broad area by combining different views of the same scene. For large-scale hyperspectral remote-sensing images, state-of-the-art techniques frequently suffer from accumulated errors and high processing costs. This research paper therefore aims to produce high-precision multispectral mapping with minimal spatial and spectral distortion. The stitching framework was created as follows. First, the UAV collects the raw input data, which is then labeled using a connected component labeling strategy that assigns a label to each pixel, following the EEG (Alpha, Beta, Theta, and Delta) technique. Next, feature extraction is performed with a novel decortication Hydrolysis CNN approach, which extracts both active and passive characteristics. After feature extraction, a novel chromatographic classification approach separates the features without overfitting. Finally, a novel yield mapping georeferencing technique stitches all images together with proper alignment and segmented overlapping fields of view. The proposed deep learning model is an effective method for real-time mosaic image feature extraction, running on average 11.5 times faster than existing approaches on the experimental samples.
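The first stage of the framework labels each pixel via connected component labeling. As an illustrative aid only, the following is a minimal NumPy sketch of 4-connected component labeling on a binary mask using a BFS flood fill; the function name `connected_components` and the toy mask are hypothetical, and the paper's actual pipeline additionally involves its EEG-based labeling, CNN feature extraction, and georeferencing stages, which are not reproduced here.

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """4-connected component labeling on a binary mask via BFS flood fill.

    Returns an integer label image (0 = background) and the component count.
    """
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue  # pixel already belongs to a discovered component
        current += 1
        labels[sy, sx] = current
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            # visit the four axis-aligned neighbors
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

# Toy binary mask containing two separate foreground regions.
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 1, 1]], dtype=bool)
labels, n = connected_components(mask)
print(n)  # 2
```

In practice, library routines such as `scipy.ndimage.label` perform the same operation far more efficiently; the explicit loop above is shown only to make the labeling step concrete.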