2018
DOI: 10.1111/cgf.13353
Visual Attention for Rendered 3D Shapes

Abstract: Understanding the attentional behavior of the human visual system when visualizing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only solution to explore this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye tracking experiments have been conducted using 3D shapes. Thus, potential factors that may influence the human gaze in the specific setting of 3D rendering are still…

Cited by 37 publications (68 citation statements)
References 35 publications
“…According to their recommendations, we selected Pearson's linear correlation coefficient (CC), computed as CC(H, T) = cov(H, T) / (σ_H σ_T), where H and T are the saliency maps produced by a competing method and the ground truth, respectively. We also selected the area under the ROC curve (AUC) suggested in [6]. The ground-truth maps are thresholded to be converted into binary maps (in our experiments we threshold to obtain M vertices considered salient, where M equals the number of human-selected Schelling points on the 3D object).…”
Section: Quantitative Saliency Results
confidence: 99%
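The CC metric and the top-M thresholding described in the statement above can be sketched as follows. This is a minimal illustration, not the cited authors' code; the function names and the NumPy-based implementation are assumptions.

```python
import numpy as np

def pearson_cc(h, t):
    """Pearson's linear correlation between two saliency maps:
    CC(H, T) = cov(H, T) / (sigma_H * sigma_T)."""
    h = np.asarray(h, dtype=float).ravel()
    t = np.asarray(t, dtype=float).ravel()
    # np.cov uses ddof=1, so use the matching sample std.
    return np.cov(h, t)[0, 1] / (h.std(ddof=1) * t.std(ddof=1))

def binarize_top_m(saliency, m):
    """Threshold a ground-truth map so that its M most salient
    vertices become 1 and all others 0 (the binary map used for AUC)."""
    saliency = np.asarray(saliency, dtype=float).ravel()
    binary = np.zeros_like(saliency)
    binary[np.argsort(saliency)[-m:]] = 1.0
    return binary
```

For example, `pearson_cc` returns 1.0 for two identical maps, and `binarize_top_m([0.1, 0.9, 0.5, 0.7], 2)` marks the two highest-scoring vertices as salient.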
“…Another reason for choosing the Schelling dataset is that it is much larger than existing eye-fixation datasets. For example, the dataset proposed in [6] contains 32 meshes and the one proposed in [18] merely contains 5 meshes. As one of our aims is to demonstrate the generalisation capability of our approach, experiments on a larger dataset are more desirable.…”
Section: Datasets and Ground Truth Generation
confidence: 99%
“…As a matter of fact, the viewing information is included as a parameter neither in the annotated datasets (used to train models) such as MIT300 [8] and CAT2000 [9], which were established for the MIT Saliency Benchmark (saliency.mit.edu), nor in the computational models themselves. Since the few existing 3D models consider geometry information only, without texture or shading [10], applying them in an immersive environment is very restricted (because of the lack of texture, for example). On the other hand, several promising 2D models [11] that showed high performance could be applied in the immersive context by considering 2D projection views of the 3D data, rendered by a specific rule.…”
Section: Introduction and Problem Statement
confidence: 99%