Fusion of multimodal imaging data provides medical experts with ample information for better disease diagnosis and further clinical investigation. Recently, sparse representation (SR)‐based fusion algorithms have been gaining importance for their high performance. Building a compact, discriminative dictionary with reduced computational effort is a major challenge for these algorithms. Addressing this key issue, we propose an adaptive dictionary learning approach for fusion of multimodal medical images. The proposed approach consists of three steps. First, zero‐informative patches of the source images are discarded by variance computation. Second, the structural information of the remaining image patches is evaluated using the modified spatial frequency (MSF). Finally, a selection rule is employed to separate the useful informative patches of the source images for dictionary learning. At the fusion step, the batch‐OMP algorithm is used to estimate the sparse coefficients. A novel fusion rule, which measures the activity level in both the spatial and transform domains, is adopted to reconstruct the fused image from the sparse vectors and the trained dictionary. Experimental results on various medical image pairs and clinical data sets show that the proposed fusion algorithm yields better visual quality and is competitive with existing methods both visually and quantitatively.
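The three patch-selection steps above can be sketched as follows. This is a minimal illustration, assuming 8×8 non-overlapping patches and illustrative threshold values (`var_eps`, `msf_thresh` are hypothetical parameters, not the paper's); the paper's exact MSF formulation and selection rule may differ, though MSF commonly extends spatial frequency with two diagonal gradient terms:

```python
import numpy as np

def extract_patches(img, size=8):
    """Split a grayscale image into non-overlapping size x size patches."""
    h, w = img.shape
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def modified_spatial_frequency(p):
    """Spatial frequency extended with two diagonal gradient terms
    (one common form of MSF; the paper's definition is assumed similar)."""
    rf = np.mean(np.diff(p, axis=1) ** 2)          # row (horizontal) term
    cf = np.mean(np.diff(p, axis=0) ** 2)          # column (vertical) term
    d1 = np.mean((p[1:, 1:] - p[:-1, :-1]) ** 2)   # main-diagonal term
    d2 = np.mean((p[1:, :-1] - p[:-1, 1:]) ** 2)   # anti-diagonal term
    return np.sqrt(rf + cf + d1 + d2)

def select_training_patches(img, var_eps=1e-6, msf_thresh=1.0):
    """Steps 1-3: drop near-zero-variance (zero-informative) patches,
    then keep patches whose MSF exceeds a threshold for dictionary learning."""
    patches = extract_patches(img)
    nonflat = [p for p in patches if np.var(p) > var_eps]   # step 1
    return [p for p in nonflat                              # steps 2-3
            if modified_spatial_frequency(p) > msf_thresh]
```

For example, on a 16×16 image containing a single vertical edge, only the two patches that straddle the edge survive selection; flat patches are discarded at the variance stage before MSF is ever computed, which is what reduces the dictionary-learning workload.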