2012
DOI: 10.1109/tip.2011.2165292

Eye-Tracking Database for a Set of Standard Video Sequences

Abstract: This correspondence describes a publicly available database of eye-tracking data, collected on a set of standard video sequences that are frequently used in video compression, processing, and transmission simulations. A unique feature of this database is that it contains eye-tracking data for both the first and second viewings of the sequence. We have made available the uncompressed video sequences and the raw eye-tracking data for each sequence, along with different visualizations of the data and a preliminar…

Cited by 94 publications (63 citation statements)
References 8 publications
“…We calculate both the mean PSNR and mean EWPSNR values of all the reconstructed NK frames at various sampling rates. To compute the EWPSNR values, we exploit the first-viewing eye-tracking data provided in [15]. Table 1 and Table 2 depict both the mean PSNR and mean EWPSNR results of all the considered DCVS codecs with a short GOP (i.e., GOP=2) and a long GOP (i.e., GOP=8), respectively.…”
Section: Results
confidence: 99%
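The excerpt above reports EWPSNR, i.e., PSNR weighted by recorded gaze locations. A minimal sketch of an eye-tracking-weighted PSNR follows, assuming a Gaussian foveation weight centered on each gaze point; the function name, `sigma`, and the exact weighting scheme are illustrative assumptions, not the cited codecs' definition:

```python
import numpy as np

def ewpsnr(ref, dist, gaze_points, sigma=64.0, max_val=255.0):
    """Eye-tracking-weighted PSNR (sketch): per-pixel squared error is
    weighted by a normalized Gaussian map centered on the gaze points."""
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weight = np.zeros((h, w))
    for gx, gy in gaze_points:  # accumulate one Gaussian per gaze point
        weight += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    weight /= weight.sum()  # normalize so the weights sum to 1
    err = ref.astype(np.float64) - dist.astype(np.float64)
    wmse = np.sum(weight * err ** 2)  # weighted mean squared error
    return 10.0 * np.log10(max_val ** 2 / wmse)
```

With a uniform weight map this reduces to ordinary PSNR; the Gaussian weighting simply concentrates the error budget where viewers actually looked.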
“…The results in [28] showed that gaze locations in the first and second viewings can differ notably; however, they remain relatively close to each other when there is a single dominant salient region in the scene (for example, the face in the Foreman sequence). As a result, it is reasonable to expect that good saliency models will produce high scores for those frames where the first- and second-viewing data agree.…”
Section: B. Eye-Tracking Video Datasets
confidence: 90%
“…1) The SFU Dataset: The SFU eye-tracking dataset [28] … The first viewing is used as ground truth for evaluating the performance of saliency models, whereas the data from the second viewing is used to construct benchmark models, as described in Section II-C. The results in [28] showed that gaze locations in the first and second viewings can differ notably; however, they remain relatively close to each other when there is a single dominant salient region in the scene (for example, the face in the Foreman sequence).…”
Section: B. Eye-Tracking Video Datasets
confidence: 99%
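The agreement between first- and second-viewing gaze described above can be quantified, for instance, as the Pearson correlation between Gaussian-blurred fixation maps built from each viewing. A sketch under that assumption (the function names and `sigma` are illustrative, not the metric used in the cited work):

```python
import numpy as np

def fixation_map(points, shape, sigma=32.0):
    """Gaussian-blurred fixation density map from a list of gaze points."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    m = np.zeros(shape)
    for gx, gy in points:
        m += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    return m

def viewing_agreement(first, second, shape):
    """Pearson correlation (CC) between first- and second-viewing maps."""
    a = fixation_map(first, shape).ravel()
    b = fixation_map(second, shape).ravel()
    a = (a - a.mean()) / a.std()  # standardize both maps
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```

Frames where this score is high are the ones where, per the excerpt, first- and second-viewing data agree and a good saliency model should score well.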
“…However, it can be a very complex process for multiple scenes of a video, and the bitrate needed to encode multiple scenes can be very high. In [18], HEVC coding based on fast intra prediction is presented to reduce the computational complexity and bitrate of video encoding. In this paper, screen content coding (SCC) is used with the HEVC technique to provide low latency and fast transmission.…”
Section: Video Encoding Issues
confidence: 99%
“…This technique can be used in the fields of medical imaging, photography, satellite imaging, HDTV, stereoscopic video processing, face recognition, and video coding to estimate saliency and strongly compress high-definition videos. Our video compression provides fast computation for large training databases such as the SFU dataset [18] and the HEVC video_database [19]. Multiple factors make the HEVC architecture efficient and help enhance the performance of the system.…”
Section: Video Encoding Based On Saliency Features
confidence: 99%