2020
DOI: 10.1049/iet-ipr.2020.0773
Fast visual saliency based on multi‐scale difference of Gaussians fusion in frequency domain

Abstract: To reduce the computation required to determine the proper scale of a salient object, a fast visual saliency method based on multi-scale difference of Gaussians fusion in the frequency domain (MDF) is proposed. First, based on the observation that the foreground energy is highlighted and densely distributed over certain bands of the spectrum, the scale coefficients of the foreground in an image can be iteratively approximated on the amplitude spectrum. Next, relying on the linear integration property of the Fourier transform, the feature…
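The abstract's core idea, building difference-of-Gaussians band-pass filters at several scales and fusing them directly on the image spectrum via the linearity of the Fourier transform, can be sketched briefly in Python. The sigma pairs, the uniform fusion weights, and the final smoothing below are illustrative assumptions; the paper's scale selection on the amplitude spectrum is not reproduced here.

```python
# Minimal sketch of frequency-domain multi-scale DoG saliency (assumed parameters).
import numpy as np
import cv2

def dog_saliency_frequency(image_gray, sigma_pairs=((1, 4), (2, 8), (4, 16))):
    """Fuse several DoG band-pass responses directly on the image spectrum."""
    img = image_gray.astype(np.float32) / 255.0
    h, w = img.shape
    spectrum = np.fft.fft2(img)                      # image spectrum
    # Frequency grid (cycles per pixel) for building Gaussian filters in the
    # frequency domain: the Fourier transform of a Gaussian is again a Gaussian.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r2 = fx ** 2 + fy ** 2
    fused = np.zeros_like(spectrum)
    for s1, s2 in sigma_pairs:
        g_fine = np.exp(-2.0 * np.pi ** 2 * s1 ** 2 * r2)
        g_coarse = np.exp(-2.0 * np.pi ** 2 * s2 ** 2 * r2)
        # Linearity of the Fourier transform: summing filtered spectra here is
        # equivalent to summing the DoG responses in the spatial domain.
        fused += spectrum * (g_fine - g_coarse)
    saliency = np.abs(np.fft.ifft2(fused))
    saliency = cv2.GaussianBlur(saliency.astype(np.float32), (9, 9), 2.5)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)

# Example usage (hypothetical file name):
# gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# sal = dog_saliency_frequency(gray)
```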

Citations: cited by 18 publications (10 citation statements)
References: 34 publications
“…As the visualization results show in Figure 5, four SOTA saliency detection models, a Deep Hierarchical Saliency Network (DHSNet) [36], a U²-Net [14], a Multi-Scale Difference of Gaussians Fusion in Frequency (MDF) [37], and Wavelet Integrated Deep Networks for Image Segmentation (WaveSNet) [38] were used in the experiment to compare with our FESNet. We completed a verification of the datasets (GrainPest and SOC).…”
Section: Experiments and Results
confidence: 99%
“…The coordinates of the feature maps and the resized saliency map are denoted by i and j. In this study, the method of Li et al. [79] was applied to determine the saliency map of a video frame due to its low computational costs. Figure 3 depicts several video frames and their saliency maps.…”
Section: Proposed Methods
confidence: 99%
“Illustration of saliency map extraction: (a,c,e,g) input video frames and (b,d,f,h) saliency maps of the input video frames obtained by the method of Li et al. [79].…”
Section: Figure
confidence: 99%
“…Compared with RGB space, the color distribution of LAB space is richer and more uniform in a visual sense, which makes it more suitable for saliency detection. Since saliency models [13], [17], [18] based on the frequency domain can segment regions with better edge effects than spatial-domain models, we introduce the multi-scale Difference of Gaussians fusion [19] to extract the feature spectrum M of LAB space in the frequency domain:…”
Section: Dehaze-driven Saliency Detection
confidence: 99%
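A minimal sketch of the operation described in this quote: a multi-scale DoG band-pass applied to each CIELAB channel in the frequency domain, with the band-passed spectral magnitudes accumulated into a feature spectrum M. The sigma pairs and the simple sum over channels are assumptions; the elided equation from the cited paper is not reconstructed.

```python
import numpy as np
import cv2

def lab_feature_spectrum(image_bgr, sigma_pairs=((1, 4), (2, 8))):
    """Multi-scale DoG band-pass over each CIELAB channel, fused on the spectrum."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    h, w, _ = lab.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r2 = fx ** 2 + fy ** 2
    M = np.zeros((h, w), dtype=np.float32)
    for ch in range(3):                              # L, a, b channels
        spectrum = np.fft.fft2(lab[:, :, ch])
        for s1, s2 in sigma_pairs:
            band = (np.exp(-2.0 * np.pi ** 2 * s1 ** 2 * r2)
                    - np.exp(-2.0 * np.pi ** 2 * s2 ** 2 * r2))
            # Accumulate the band-passed spectral magnitude of this channel.
            M += np.abs(spectrum * band).astype(np.float32)
    return M
```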