2021
DOI: 10.4236/jcc.2021.96005
Multi-Sensor Image Fusion: A Survey of the State of the Art

Abstract: Image fusion has developed into an important area of research. In remote sensing, the same image sensor operating in different working modes, or different image sensors, can provide reinforcing or complementary information. It is therefore highly valuable to fuse the outputs of multiple sensors (or of the same sensor in different working modes) to improve the overall quality of the remote sensing images, which are useful for both human visual perception and image processing tasks. Accordingly, in this paper, we …

Cited by 16 publications (11 citation statements)
References 162 publications (136 reference statements)
“…86 However, sparse-representation-based fusion approaches have struggled with matching the sparsity level to the loss of relevant information during medical image recovery, and with the quality of the learned dictionary. [87][88][89] In this paper, a modified sparsity-based image fusion framework was proposed. First, 16 natural grayscale images were used to train a dictionary of dimension 64 × 128 at a sparsity level of five atoms.…”
Section: Details Of Experiments Done On 38 Brain Tumor Patients
confidence: 99%
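The quoted framework codes image patches against a 64 × 128 dictionary at a sparsity level of five. A minimal sketch of this style of sparse fusion, using a random (untrained) dictionary, a plain orthogonal-matching-pursuit coder, and a max-magnitude coefficient fusion rule — all illustrative choices, not the cited paper's exact method:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with at most k atoms of D."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if idx not in support:
            support.append(idx)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D @ coef
    return coef

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

# sparse codes (sparsity level 5) of two co-registered 8x8 source patches
a = omp(D, rng.standard_normal(64), k=5)
b = omp(D, rng.standard_normal(64), k=5)

# fusion rule: per atom, keep the coefficient with the larger magnitude
fused = np.where(np.abs(a) >= np.abs(b), a, b)
fused_patch = D @ fused                   # reconstruct the fused patch
```

In practice the dictionary would be learned (e.g., by K-SVD on the 16 training images the excerpt mentions) rather than drawn at random.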
“…To address the mean-shift algorithm's inability to update the target model in real time, Li B et al. improved it and proposed the CamShift algorithm, a tracking algorithm based on the color probability distribution. This algorithm suits tracking scenarios where the target color is uniform and differs strongly from the background, but not scenarios where the target and background colors are similar, or where the background is complex and the target texture is rich [19]. Luo L et al. proposed a continuous adaptive mean-shift tracking algorithm with a background-suppression histogram model, which improves tracking accuracy and stability by suppressing the hues belonging to the background in the original color model [20].…”
Section: Panoramic Video Multi-Target Real-Time Tracking
confidence: 99%
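CamShift builds on the mean-shift core: repeatedly move a search window to the centroid of a color-probability (back-projection) map until it stops. A minimal sketch of just that core step, on a synthetic probability map (no window-size adaptation, so this is mean shift rather than full CamShift; the map and window values are illustrative):

```python
import numpy as np

def mean_shift(prob, window, n_iter=30):
    """Shift a search window toward the centroid of a probability map.

    prob   -- 2-D back-projection (higher values = more target-like)
    window -- (x, y, w, h) in image coordinates
    """
    x, y, w, h = window
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(n_iter):
        roi = prob[y:y + h, x:x + w]
        mass = roi.sum()
        if mass == 0:
            break
        # centroid of the probability mass inside the window
        cx = (roi * xs).sum() / mass
        cy = (roi * ys).sum() / mass
        # re-center the window on that centroid (clamped to the image)
        nx = int(round(x + cx - (w - 1) / 2))
        ny = int(round(y + cy - (h - 1) / 2))
        nx = min(max(nx, 0), prob.shape[1] - w)
        ny = min(max(ny, 0), prob.shape[0] - h)
        if (nx, ny) == (x, y):
            break  # converged
        x, y = nx, ny
    return x, y, w, h

# synthetic back-projection: a single bright 20x20 "target" blob at (60, 40)
prob = np.zeros((100, 100))
prob[40:60, 60:80] = 1.0

x, y, w, h = mean_shift(prob, (50, 30, 20, 20))  # start window off-target
```

Full CamShift additionally re-estimates the window size and orientation from the second-order moments of the probability mass at each convergence, which is what lets it follow targets that change scale.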
“…2) Feature-level fusion: Feature-level fusion first extracts the relevant features of each source data, such as shape, size, texture, etc.; then feature fusion processing is performed on the features to generate new features, and finally object interpretation work is performed. At present, the existing feature fusion techniques can be divided into two main categories [29]: feature selection-based and feature extraction-based. In feature selection-based fusion techniques, all the features are first assembled together, then feature selection is performed using a suitable method, and the result is finally used for object interpretation.…”
Section: Introduction
confidence: 99%
“…Decision-level fusion can use a variety of logical inference methods, statistical methods, information-theoretic methods, etc., mainly including Bayesian inference, Dempster-Shafer (D-S) evidence theory [33], fuzzy decision making, neural networks, etc. [29]. Rastiveis [34] proposed a three-level fusion algorithm based on Bayesian theory for LiDAR data and aerial imagery to improve urban land-cover classification accuracy.…”
Section: Introduction
confidence: 99%
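Of the decision-level methods listed, Dempster-Shafer combination is the most mechanical to illustrate: each sensor assigns belief mass to subsets of the hypotheses, and Dempster's rule multiplies the masses, discards the conflicting products, and renormalizes. A self-contained sketch with made-up masses for a LiDAR/image land-cover example (the class names and numbers are purely illustrative):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozenset hypotheses."""
    combined = {}
    conflict = 0.0
    for A, ma in m1.items():
        for B, mb in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # incompatible evidence
    norm = 1.0 - conflict            # renormalize over non-conflicting mass
    return {A: v / norm for A, v in combined.items()}

building = frozenset({"building"})
road = frozenset({"road"})
either = building | road             # ignorance: mass on the whole frame

m_lidar = {building: 0.6, either: 0.4}          # LiDAR-based classifier
m_image = {building: 0.5, road: 0.3, either: 0.2}  # image-based classifier

fused = dempster_combine(m_lidar, m_image)
```

Here both sources lean toward "building", so the fused belief concentrates there; the mass left on the full frame shrinks, reflecting reduced ignorance after fusion.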