2008
DOI: 10.1117/12.766527

Perceived quality assessment of polygonal meshes using observer studies: a new extended protocol

Abstract: The complexity of a polygonal mesh is usually reduced by applying a simplification method, resulting in a similar mesh having fewer vertices and faces. Although several such methods have been developed, only a few observer studies have been reported comparing the perceived quality of the simplified meshes, and it is not yet clear how the choice of a given method, and the level of simplification achieved, influence the quality of the resulting mesh as perceived by the final users. Similar issues occur regarding other…

Cited by 5 publications (6 citation statements)
References 11 publications
“…The results of this work show that automatic measures of fidelity (e.g., Bolin's [29], Metro [5], and mean squared error (MSE) approaches) are successful at predicting experimental ratings, less successful at predicting preferences, and largely unsuccessful at predicting naming times. On the other hand, when the task involves comparing different models, ranking is reported to be preferable to rating, because the ratings given do not necessarily reflect the perceptual distance between the compared models [35], [36]. The experimental measures used in several user studies can be found in Table I.…”
Section: A. Experimental Measures
confidence: 99%
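For context, the "MSE approaches" mentioned above belong to a family of purely geometric fidelity measures. The following is a minimal sketch, assuming each mesh is available as an (N, 3) NumPy array of vertex positions, of a symmetric root-mean-square nearest-neighbour distance between the original and simplified vertex sets; a faithful Metro-style tool instead samples the surfaces densely and measures point-to-surface distance, so this is only a crude stand-in, not the measure used in the cited work.

```python
# Sketch of a symmetric RMS vertex-to-vertex distance between an original and
# a simplified mesh (a crude stand-in for Metro/MSE-style geometric error).
# Assumes each mesh is an (N, 3) NumPy array of vertex positions.
import numpy as np
from scipy.spatial import cKDTree


def rms_nn_distance(src: np.ndarray, dst: np.ndarray) -> float:
    """RMS distance from each vertex of `src` to its nearest vertex in `dst`."""
    dists, _ = cKDTree(dst).query(src)          # nearest-neighbour distances
    return float(np.sqrt(np.mean(dists ** 2)))


def symmetric_rms_error(original: np.ndarray, simplified: np.ndarray) -> float:
    """Symmetrise the two one-sided errors, as Hausdorff-type measures do."""
    return max(rms_nn_distance(original, simplified),
               rms_nn_distance(simplified, original))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.random((1000, 3))        # placeholder vertex data
    simplified = original[::4]              # crude "simplification": drop vertices
    print(symmetric_rms_error(original, simplified))
```

Such a score correlates with rated fidelity for a single pair of meshes, but, as the quoted statement notes, it says nothing about whether observers would rank one simplified model above another.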
“…This can be achieved by animating the object or the viewpoint, as in [21], [35], or by giving the user free viewpoint control, as in [8], [12], [36]. Furthermore, animation affects the perception of the models: in the study of Rogowitz and Rushmeier [35], artifacts caused by simplification were less visible when the objects were rotating than when they were standing still.…”
Section: B. Experimental Design
confidence: 99%
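As an illustration of the rotating versus static presentation conditions described above, the sketch below generates rotated copies of a mesh's vertices about the vertical axis, one per animation frame. This is only a hedged sketch, assuming the mesh is given as an (N, 3) NumPy vertex array; the cited studies used their own stimulus-rendering setups, which are not reproduced here.

```python
# Sketch of generating rotating-presentation frames for an observer study:
# rotate the mesh vertices about the vertical (y) axis by a fixed step per frame.
# Assumes vertices are an (N, 3) NumPy array; rendering is left to the viewer used.
import numpy as np


def rotation_y(angle_rad: float) -> np.ndarray:
    """3x3 rotation matrix about the y axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])


def rotating_frames(vertices: np.ndarray, n_frames: int = 120) -> list:
    """Vertex positions for one full revolution, one array per frame."""
    step = 2.0 * np.pi / n_frames
    return [vertices @ rotation_y(i * step).T for i in range(n_frames)]


if __name__ == "__main__":
    verts = np.random.default_rng(1).random((500, 3))   # placeholder mesh vertices
    frames = rotating_frames(verts, n_frames=36)
    print(len(frames), frames[0].shape)                  # 36 (500, 3)
```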