2007
DOI: 10.1049/el:20071901

Perceptually optimised coding of open signed video based on gaze prediction

Cited by 2 publications (9 citation statements)
References 0 publications
“…Previous work [2] has shown that in open signed material there are two especially important regions - the signer's face and, to a lesser extent, the inset showing the original video.…”
Section: Eye Tracking Results and Analysis (mentioning)
confidence: 99%
“…Introducing the cues demonstrated in [2] could also see an improvement in performance. The gaze predictions can be used for a variety of applications, including variable quality coding and error protection.…”
Section: Discussion (mentioning)
confidence: 97%
“…Due to the omission of any top-down processes, saliency has been shown to be inadequate for such video contexts (including open sign language [11]). Other techniques [8] introduce the idea of a top-down approach, but rely instead on categorizing objects within an image and then associating predetermined probabilities of eye-fixation with each object.…”
Section: Introduction (mentioning)
confidence: 99%