Spatial enhancement of low-resolution hyperspectral imagery using high-resolution multispectral imagery is commonly performed with image fusion algorithms. Regardless of the algorithm used, pixels containing edges, corners, shadows, and dark or low-contrast materials present the greatest challenge, and confidence in sharpening results is therefore often low at these 'trouble pixels'. This paper presents our initial experiments and results in leveraging spatial information to drive and improve the fusion process. We present an adaptive algorithm workflow that adjusts to the spatial conditions identified at those pixels, along with a novel edge detection scheme based on spectral angle calculations applied to either high- or low-resolution imagery. Target signatures were synthetically implanted in pixels identified as strong edges, and an ACE detector was run on all fused and reference imagery. Based on the calculated ACE target detection ROC curves, our results show that modifying the NNDiffuse algorithm to include factors that leverage spatial features (i.e., spectral differences between neighboring pixels and differences in 'edgeness' of neighboring pixels) produced significant improvements in detection rates compared to the classical (unmodified) NNDiffuse algorithm.
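
The abstract describes an edge detection scheme driven by spectral angle calculations between neighboring pixels. The sketch below illustrates one plausible realization of that idea, not the paper's exact formulation: the choice of a 4-connected neighborhood, the max-angle 'edgeness' score, and the optional threshold are assumptions made for illustration.

```python
import numpy as np

def spectral_angle_edge_map(cube, threshold=None):
    """Per-pixel 'edgeness' from spectral angles to 4-connected neighbors.

    cube: float array of shape (rows, cols, bands) holding pixel spectra.
    Returns the maximum neighbor spectral angle (radians) at each pixel,
    plus a boolean edge mask when a threshold (radians) is supplied.
    """
    rows, cols, bands = cube.shape

    # Unit-normalize each pixel spectrum so dot products give cos(angle).
    norms = np.linalg.norm(cube, axis=2, keepdims=True)
    unit = cube / np.clip(norms, 1e-12, None)

    edge = np.zeros((rows, cols), dtype=np.float64)
    # Compare each pixel against its 4-connected neighbors via array shifts.
    for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        shifted = np.roll(np.roll(unit, dr, axis=0), dc, axis=1)
        cos_angle = np.clip(np.sum(unit * shifted, axis=2), -1.0, 1.0)
        edge = np.maximum(edge, np.arccos(cos_angle))

    # np.roll wraps around at the image borders, so border comparisons pair
    # opposite sides of the scene; zero them out to avoid spurious edges.
    edge[0, :] = edge[-1, :] = 0.0
    edge[:, 0] = edge[:, -1] = 0.0

    if threshold is not None:
        return edge, edge > threshold
    return edge
```

In this sketch, a large maximum spectral angle to any neighbor marks a pixel as lying on a material boundary, which is the kind of 'strong edge' pixel the abstract says was used for synthetic target implanting.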