2008 8th IEEE International Conference on Automatic Face & Gesture Recognition
DOI: 10.1109/afgr.2008.4813347

Acceptability ratings by humans and automatic gesture recognition for variations in sign productions

Abstract: In this study we compare human and machine acceptability judgments for extreme variations in sign productions. We gathered acceptability judgments of 26 signers and scores of three different Automatic Gesture Recognition (AGR) algorithms that could potentially be used for automatic acceptability judgments, in which case the correlation between human ratings and AGR scores may serve as an 'acceptability performance' measure. We found high human-human correlations, high AGR-AGR correlations, but low human-AGR co…
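The 'acceptability performance' measure described in the abstract, the correlation between human acceptability ratings and AGR scores, could be computed along the following lines. This is a minimal sketch, not the authors' implementation; the per-variant ratings and scores below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical data: one entry per sign variant.
# human_ratings: mean acceptability judgment across the 26 signers (placeholder scale).
# agr_scores:    score returned by one AGR algorithm for the same variants.
human_ratings = np.array([6.1, 5.4, 2.3, 4.8, 1.9, 3.7])
agr_scores    = np.array([0.92, 0.85, 0.40, 0.71, 0.55, 0.48])

# 'Acceptability performance' taken here as a simple Pearson correlation:
# a high value would mean the AGR scores mirror human acceptability judgments.
acceptability_performance = np.corrcoef(human_ratings, agr_scores)[0, 1]
print(f"acceptability performance (Pearson r): {acceptability_performance:.2f}")
```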

Citations: cited by 3 publications (2 citation statements). References: 13 publications.
“…For example, using people's behavior and intuitions to benchmark models and systems is argued for in [1], who used human ratings of the acceptability of signing variability, and the discrepancies of these ratings from computer ratings, to benchmark the representability of test data and test systems. Of course, even though the study was designed to draw on what visual primitives and low-level strategies people may be tapping into in order to interpret the transitive action gestures presented, it is never certain that these are the actual strategies and that people are not availing of other unspoken but crucial inferencing.…”
Section: Limitations and Future Directions
Confidence: 99%
“…The correctness values given to these sign variants by an algorithm were regarded as machine acceptability ratings. The variants were ranked according to these ratings, and the resulting rankings were compared to the rankings human signers had made in experiment 5 (see [23] for more details). Somewhat surprisingly, the rank correlation (Kendall's tau) between the machine and human rankings was low (average 0.30).…”
Confidence: 99%
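The rank comparison described in this excerpt can be reproduced in outline with Kendall's tau, for example via scipy.stats.kendalltau. The rankings below are illustrative placeholders, not the study's data.

```python
from scipy.stats import kendalltau

# Hypothetical rankings of the same six sign variants (1 = most acceptable).
human_ranking   = [1, 2, 3, 4, 5, 6]   # order derived from signers' acceptability judgments
machine_ranking = [2, 1, 5, 3, 6, 4]   # order derived from one AGR algorithm's scores

# Kendall's tau measures agreement between the two orderings, ranging from -1 to 1;
# values near 0.30, as reported above, indicate weak human-machine agreement.
tau, p_value = kendalltau(human_ranking, machine_ranking)
print(f"Kendall's tau: {tau:.2f} (p = {p_value:.3f})")
```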