2014 IEEE International Conference on Image Processing (ICIP) 2014
DOI: 10.1109/icip.2014.7025220
Semiautomatic visual-attention modeling and its application to video compression

Cited by 37 publications (34 citation statements)
References 22 publications
“…Our experiments used the SAVAM [7] video-saliency dataset, which contains 45 videos that are each 12-18 seconds long. The original models provided by the developers of OM-CNN, ACL and SAM were trained on other datasets: SAM used SALICON [24]; ACL used SALICON, DHF1K [14], Hollywood-2 [25] and UCF Sports [25]; and OM-CNN used LEDOV [13].…”
Section: Saliency Models Preparation
confidence: 99%
“…The authors of [3] showed that the performance of non-deep models can improve considerably through application of simple transformations.

Table 1: Objective evaluation results of the selected models and the baselines using five metrics on the SAVAM [7] dataset. The results include the original models, the fine-tuned models for the training part of the SAVAM dataset (FT), and the result after applying the postprocessing transformations (PP).…”
Section: Saliency Models Preparation
confidence: 99%
“…Examples of databases include: VQEG FRTV Phase I and HDTV, which contain SD and HD television sequences [46], [47]; the LIVE database, which analysed MPEG-2 and H.264 compression and simulated transmissions over wired and wireless IP networks [48]; the LIVE Mobile database, which contains H.264-compressed videos with distortions such as packet loss, frame freezes, and rate adaptation, evaluated on smartphone and tablet mobile devices [49], [50]; the ECVQ and EVVQ databases, which contain MPEG-4 and H.264 compressed videos at CIF and VGA resolution [51], [52]; the MMSP SVD database, which contains sequences encoded using scalable video coding [53]; and the SAVAM database, which contains eye-tracking data for video sequences at HD and UHD resolution [54].…”
Section: Public VQA Databases
confidence: 99%
“…Methods for easily obtaining saliency maps have been introduced [5]. In addition, new research is underway that uses crowdsourcing to obtain saliency maps at low cost and in a short time [6].…”
unclassified