2016 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2016.7532867
Visual attention inspired distant view and close-up view classification

Cited by 7 publications (18 citation statements)
References 9 publications
“…(2) In order to classify the view-type, we model the focus and scale cues inspired by human visual attention, denoted as NSCT+SURF and AdobeBING+CNN respectively. This proposal shows superior performance over other baselines, with or without the HVS cues, and achieves the improvement from 84.00% in our prior framework [18] to 93.17%. (3) Due to the relatively new nature of the view-type classification problem in the field, we have also established a new benchmark containing 5050 natural narrow-view and wide-view images to facilitate our investigation and evaluate the proposed framework.…”
mentioning
confidence: 80%
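The statement above describes combining two visual-attention cues, a focus cue (NSCT+SURF) and a scale cue (BING+CNN), to classify view type. As a rough illustration of that late-fusion idea only, here is a minimal sketch that fuses two per-cue classifiers; the features are synthetic stand-ins and the nearest-centroid classifier is an assumption for illustration, not the paper's actual NSCT/SURF or BING/CNN pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # 0 = distant view, 1 = close-up view

# Synthetic stand-ins for the two cue feature vectors; a real system would
# compute focus features (NSCT+SURF) and scale features (BING+CNN) per image.
focus = rng.normal(size=(n, 8)) + labels[:, None] * 1.0
scale = rng.normal(size=(n, 8)) - labels[:, None] * 1.0

def centroid_scores(feats, labels):
    # Score each sample against the two class centroids; negate the distance
    # so that a higher score means a better match to that class.
    c0 = feats[labels == 0].mean(axis=0)
    c1 = feats[labels == 1].mean(axis=0)
    dists = np.stack([np.linalg.norm(feats - c0, axis=1),
                      np.linalg.norm(feats - c1, axis=1)], axis=1)
    return -dists

# Late fusion: sum the per-cue class scores, then take the argmax.
fused = centroid_scores(focus, labels) + centroid_scores(scale, labels)
pred = fused.argmax(axis=1)
accuracy = (pred == labels).mean()
print(f"fused training accuracy: {accuracy:.2f}")
```

Fusing at the score level lets each cue vote independently, which is one common way to combine complementary cues like focus and scale.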
“…In our previous work, we proposed a framework specifically designed with the inspiration from the visual attention in HVS [18], i.e. focus and scale cues, in performing view-type classification.…”
Section: Visual Attention Inspired Modeling
mentioning
confidence: 99%