2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00217
Salience Guided Depth Calibration for Perceptually Optimized Compressive Light Field 3D Display

Cited by 16 publications (12 citation statements)
References 43 publications
“…In addition, proposed a unified architecture based on weighted sparse coding. Later methods [Zhang et al., 2015; Sheng et al., 2016; Wang et al., 2017a; Zhang et al., 2017a; Wang et al., 2018a; Wang et al., 2018b] explored and further combined multiple visual cues (e.g., depth, color contrast, light field flows, and boundary prior) to detect saliency. Most recent methods have shifted more attention to depth information and employed cellular automata for saliency detection in light field.…”
Section: Light Field SOD
confidence: 99%
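The cue-combination idea described in the excerpt above can be made concrete with a minimal sketch. This is not any of the cited methods (which fuse cues through learned models or cellular automata); it is a toy weighted linear fusion in Python, and the cue maps, weights, and function names are hypothetical.

```python
import numpy as np

def normalize(cue):
    """Scale a cue map to [0, 1]; constant maps become all zeros."""
    lo, hi = cue.min(), cue.max()
    return (cue - lo) / (hi - lo) if hi > lo else np.zeros_like(cue)

def fuse_cues(cues, weights):
    """Weighted linear fusion of per-pixel cue maps into one saliency map."""
    fused = sum(w * normalize(c) for c, w in zip(cues, weights))
    return normalize(fused)

# Hypothetical cue maps of the same spatial size, standing in for
# depth contrast, color contrast, and a boundary prior.
h, w = 120, 160
depth_cue = np.random.rand(h, w)
color_cue = np.random.rand(h, w)
boundary_prior = np.random.rand(h, w)

saliency = fuse_cues([depth_cue, color_cue, boundary_prior],
                     weights=[0.4, 0.4, 0.2])
print(saliency.shape, saliency.min(), saliency.max())
```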
“…Generally, there are three main categories of attention mechanisms: channel attention, spatial attention [Woo et al., 2018], and self-attention [Wang et al., 2018c], a concept borrowed from the field of natural language processing. These attention mechanisms can be easily embedded into different CNN-based architectures.…”
Section: Attention Mechanism
confidence: 99%
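As a concrete illustration of the channel-attention category mentioned above, below is a minimal Squeeze-and-Excitation style module in PyTorch. The class name, reduction ratio, and tensor sizes are illustrative assumptions rather than code from the cited works; spatial attention and self-attention plug into a backbone in the same way with different gating.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-Excitation style channel attention: global average
    pooling followed by a small bottleneck MLP that produces per-channel
    gating weights in (0, 1)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                           # reweight feature channels

# Drop-in usage on a hypothetical feature map from any CNN backbone.
features = torch.randn(2, 64, 32, 32)
attended = ChannelAttention(64)(features)
print(attended.shape)  # torch.Size([2, 64, 32, 32])
```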
“…Traditional/Deep Models. The classic models for light field SOD often use superpixel-level hand-crafted features [95], [129]-[135], [137], [143]. Early work [129], [135] showed that the unique refocusing capability of light fields can provide useful focusness, depth, and objectness cues, and further proposed several SOD models using light field data.…”
Section: Saliency Detection on Light Field (Light Field SOD Models)
confidence: 99%
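A minimal sketch of how a focal stack yields the focusness and coarse depth cues mentioned in the excerpt above, using a Laplacian-energy focus measure per slice. The function name, window size, and stack shape are illustrative assumptions and not the exact hand-crafted features of the cited models.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focusness_cue(focal_stack, window=9):
    """Per-pixel focus measure over a focal stack (S x H x W, grayscale).
    Returns the index of the best-focused slice (a coarse depth proxy)
    and the corresponding focus strength (a focusness cue)."""
    measures = []
    for slice_ in focal_stack:
        # Local energy of the Laplacian: high where the slice is in focus.
        measures.append(uniform_filter(laplace(slice_) ** 2, size=window))
    measures = np.stack(measures)          # S x H x W
    best_slice = measures.argmax(axis=0)   # coarse depth / focus index
    focus_strength = measures.max(axis=0)  # focusness cue
    return best_slice, focus_strength

# Hypothetical 5-slice focal stack.
stack = np.random.rand(5, 120, 160)
depth_index, focusness = focusness_cue(stack)
print(depth_index.shape, focusness.shape)
```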
“…In this paper, we just make use of the camera depth region, [λmin, λmax], to guide the proposed optimization. All the depth parameters are extracted from the public dataset [13] and shared in the supplementary material of our related paper [12].…”
Section: Light Field Capture and Analysis
confidence: 99%
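The role of the depth region [λmin, λmax] can be sketched with a toy calibration step: affinely remapping scene depth into that interval. This only illustrates how the bounds constrain the result; it is not the paper's actual salience-guided optimization, and the function name and numeric values are hypothetical.

```python
import numpy as np

def calibrate_depth(depth, lambda_min, lambda_max):
    """Remap scene depth into the usable depth region [lambda_min, lambda_max]
    by affine rescaling (a simple stand-in for the guided optimization)."""
    d_lo, d_hi = depth.min(), depth.max()
    if d_hi == d_lo:
        # Flat depth map: place everything at the middle of the region.
        return np.full_like(depth, 0.5 * (lambda_min + lambda_max))
    scaled = (depth - d_lo) / (d_hi - d_lo)
    return lambda_min + scaled * (lambda_max - lambda_min)

# Hypothetical depth map and depth region (units follow the dataset's convention).
depth_map = np.random.rand(120, 160) * 10.0
calibrated = calibrate_depth(depth_map, lambda_min=-1.5, lambda_max=1.5)
print(calibrated.min(), calibrated.max())
```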