2021
DOI: 10.1007/s42979-021-00499-7

Off-the-Shelf Deep Features for Saliency Detection

Abstract: Computational saliency refers to the ability to highlight salient visual information for processing. The mechanism has proven helpful for human as well as computer vision. Computational saliency focuses on designing algorithms which, similarly to human vision, predict which regions in a scene are salient. Recently, salient object segmentation has introduced the use of object proposals. Object proposal methods provide image segments as proposals which can be used for saliency estimation. We propose se…

Cited by 3 publications (4 citation statements)
References 45 publications
“…Before applying the PCA algorithm for feature dimension reduction, we need to perform data preprocessing that is required for the further steps. Given the training set $x^{(1)}, x^{(2)}, x^{(3)}, \ldots, x^{(n)}$ of n examples, we need to perform mean normalization or feature scaling, similar to the supervised learning algorithms. The mean of each feature is computed as in Eq.…”
Section: Principal Component Analysis
confidence: 99%
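As a concrete sketch of the preprocessing this statement describes, the following NumPy snippet performs mean normalization and optional feature scaling before PCA. The matrix shape and the random placeholder data are assumptions for illustration, not values taken from the cited papers.

```python
import numpy as np

# Placeholder training matrix: n examples (rows) by d features (columns).
# Random values stand in for the real feature data (assumption).
X = np.random.rand(100, 512)

# Mean normalization: compute the per-feature mean and subtract it,
# the preprocessing step the quoted statement prescribes before PCA.
mu = X.mean(axis=0)
X_centered = X - mu

# Optional feature scaling to unit standard deviation, the alternative
# the statement mentions by analogy with supervised learning pipelines.
sigma = X.std(axis=0)
sigma[sigma == 0] = 1.0  # guard against constant features
X_scaled = X_centered / sigma
```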
“…In this research, we extracted deep features [3] from pretrained convolutional neural network (CNN) models and concatenated the features to form a large feature space. This was followed by principal component analysis (PCA) for dimensionality reduction of the feature space, retaining 99% of its variance so that most of the important features stay intact.…”
Section: Introduction
confidence: 99%
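The extract-and-concatenate step described above can be sketched as follows. The citing paper does not name its CNN backbones, so torchvision's pretrained ResNet-50 and VGG-16 and the image path "example.jpg" are illustrative assumptions, not the authors' actual setup.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load two pretrained CNNs (illustrative choices) and strip their classifier
# heads so the penultimate activations serve as off-the-shelf deep features.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier = vgg.classifier[:-1]  # drop the final 1000-way FC layer
vgg.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    f1 = resnet(img)  # shape (1, 2048)
    f2 = vgg(img)     # shape (1, 4096)

# Concatenate into one large feature vector, as the statement describes.
features = torch.cat([f1, f2], dim=1)  # shape (1, 6144)
```

Stacking such vectors over all images yields the large feature matrix that PCA then compresses.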
“…In this research, we extracted deep features [3,5] from pre-trained CNN models and concatenated the features to form a large feature space. This was followed by principal component analysis (PCA) for dimensionality reduction of the feature space, retaining 99% of its variance so that most of the important features stay intact.…”
Section: Introduction
confidence: 99%
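For the 99% variance retention both statements mention, scikit-learn's PCA accepts a float n_components and keeps just enough principal components to explain that fraction of the variance. A minimal sketch, assuming a placeholder feature matrix rather than the papers' actual data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder stacked deep-feature matrix: n images by d concatenated
# features (assumption; random data stands in for real features).
X = np.random.rand(500, 6144)

# A float n_components in (0, 1) tells scikit-learn to retain just enough
# components to explain that fraction of variance; 0.99 matches the quote.
pca = PCA(n_components=0.99, svd_solver="full")
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                      # (500, k), k chosen automatically
print(pca.explained_variance_ratio_.sum())  # >= 0.99
```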