Advances in medical imaging have brought far-reaching changes to clinical analysis. Most diagnostic evaluations either result directly from imaging or work in close conjunction with imaging techniques. This creates a need to closely read, evaluate, and combine images. Medical image fusion combines clinical images acquired from one or more modalities. Multimodal fusion capitalizes on the strengths of each medical modality; incorporating features from multimodal input images thus holds added potential to support higher-quality diagnosis. Medical image fusion is an intricate task, especially when the goal is a high-quality fused image that preserves all relevant information at a reasonable operating speed. Many efforts have been undertaken in this field, resulting in diverse research approaches. Image fusion can be performed on medical images obtained from a single modality or from multiple modalities. This paper surveys work on multimodal medical images in the multiscale image fusion domain. When the same region, organ, or tissue is captured from different perspectives, complementary information is maximized and diagnostic value is reinforced. A fusion framework based on the Mexican Hat wavelet with adaptive median filtering is proposed, detailing each executable fusion block. Fusion techniques, pre- and post-processing aspects, and evaluation mechanisms are illustrated from the literature. The shift of researchers from single-processing to hybrid multi-processing techniques is discussed, and aspects of medical imaging modalities are detailed. Together, these may serve as a valuable reference for comprehensively understanding image fusion trade-offs and future directions.
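Since the proposed framework combines Mexican Hat wavelet processing with adaptive median filtering, a minimal sketch of such a pipeline is given below. It is illustrative only, not the paper's implementation: it assumes pre-registered grayscale inputs, approximates the adaptive median stage with a fixed-size median filter, uses a single-scale Laplacian of Gaussian as the Mexican Hat response, and the parameter values (`sigma`, `median_size`) are assumptions rather than values from the paper.

```python
import numpy as np
from scipy import ndimage

def mexican_hat_fuse(img_a, img_b, sigma=2.0, median_size=3):
    """Fuse two pre-registered grayscale images (float arrays in [0, 1])."""
    # Stand-in for the adaptive median filtering stage (impulse-noise removal).
    a = ndimage.median_filter(img_a, size=median_size)
    b = ndimage.median_filter(img_b, size=median_size)

    # Low-pass (approximation) components of each input.
    base_a = ndimage.gaussian_filter(a, sigma=sigma)
    base_b = ndimage.gaussian_filter(b, sigma=sigma)

    # Mexican Hat (Laplacian of Gaussian) response as an activity measure:
    # the image with the larger |LoG| at a pixel contributes its detail there.
    act_a = np.abs(ndimage.gaussian_laplace(a, sigma=sigma))
    act_b = np.abs(ndimage.gaussian_laplace(b, sigma=sigma))

    # Max-activity rule for the detail (high-pass residual), averaging for the base.
    detail = np.where(act_a >= act_b, a - base_a, b - base_b)
    base = 0.5 * (base_a + base_b)
    return np.clip(base + detail, 0.0, 1.0)

if __name__ == "__main__":
    # Toy random arrays standing in for co-registered CT and MRI slices.
    rng = np.random.default_rng(0)
    ct = rng.random((128, 128))
    mri = rng.random((128, 128))
    fused = mexican_hat_fuse(ct, mri)
    print(fused.shape, fused.min(), fused.max())
```

In a fuller implementation, the median filter window would grow adaptively where impulse noise persists, and the single-scale LoG decomposition would be replaced by a multiscale Mexican Hat wavelet transform, in keeping with the multiscale focus of the survey.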