2013
DOI: 10.1109/tbc.2013.2265782

A Novel Attention-Based Key-Frame Determination Method

Abstract: This paper presents a novel key-frame detection method that combines visual saliency-based attention features with contextual game status information for sports videos. Two critical issues of attention-based video content analysis are addressed: 1) the visual attention characteristics when a user is watching a video clip and 2) extracting the degree of excitement about the on-going game status. First, the object-oriented visual attention map and the algorithm of determining the contextual attention …

Cited by 19 publications (5 citation statements)
References 18 publications
“…Similar to the previous approaches, Lai and Yi (2012) computed static attention values using color contrast and texture contrast, where dynamic attention scores are computed using motion intensity and motion orientation consistency. These two attention values are fused to generate an attention curve from which the highly salient key frames are extracted. Shih (2013) used facial and center bias attention cues along with the spatial and temporal attention features. Salehin and Paul (2015) exploited foreground information, motion and visual saliency to extract the moving objects for summarization of surveillance video. The authors exploited the Graph-based Visual Saliency approach (GBVS) (Harel et al., 2007) to estimate salient regions in surveillance videos.…”

Section: Related Work
mentioning confidence: 99%
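The fusion of static and dynamic attention values into an attention curve, and the extraction of highly salient key frames from it, can be sketched as follows. This is a minimal illustration only: the linear fusion weight `alpha` and the mean-plus-one-standard-deviation peak threshold are hypothetical choices, not details taken from Lai and Yi (2012).

```python
import numpy as np

def fuse_attention(static_scores, dynamic_scores, alpha=0.5):
    """Fuse per-frame static and dynamic attention values into one curve.

    alpha is a hypothetical linear weighting; the cited work may use a
    different fusion scheme.
    """
    static_scores = np.asarray(static_scores, dtype=float)
    dynamic_scores = np.asarray(dynamic_scores, dtype=float)
    return alpha * static_scores + (1.0 - alpha) * dynamic_scores

def extract_key_frames(attention_curve, threshold=None):
    """Return indices of local maxima of the curve above a threshold."""
    curve = np.asarray(attention_curve, dtype=float)
    if threshold is None:
        # Hypothetical threshold: one standard deviation above the mean.
        threshold = curve.mean() + curve.std()
    # A frame qualifies if it is at least as high as both neighbors
    # and exceeds the threshold.
    return [
        i for i in range(1, len(curve) - 1)
        if curve[i] >= curve[i - 1]
        and curve[i] >= curve[i + 1]
        and curve[i] >= threshold
    ]
```

For example, fusing a curve with a single strong peak yields that frame as the lone key frame; flatter spans fall below the threshold and are skipped.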
“…In addition to color contrast and color distribution cues, another widely used prior called center prior (Wang et al., 2011; Shih, 2013) is also incorporated. According to the characteristics of the human perceptual system, the region closer to the center of the screen will attract more attention (Shih, 2013). Since salient objects are placed near the image center most of the time, the center prior gives more weight to the regions that are nearer the image center than to the regions near the image boundaries. A center prior cue is generated using a Gaussian function based on a patch's distance from the center of the video frame. The center prior weight for a patch is computed as:…”

Section: Center Bias Integration
mentioning confidence: 99%
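A Gaussian center prior of the kind described above can be sketched as below. The exact formula in the cited works is elided in the statement, so this is an assumption-laden illustration: the `sigma` parameterization (spread as a fraction of the frame diagonal) is a hypothetical choice, not necessarily the one used by Wang et al. (2011) or Shih (2013).

```python
import numpy as np

def center_prior_weight(patch_center, frame_size, sigma=0.3):
    """Gaussian center-prior weight for a patch.

    patch_center: (x, y) position of the patch in pixels.
    frame_size:   (width, height) of the video frame in pixels.
    sigma:        spread as a fraction of the frame diagonal
                  (hypothetical parameterization).
    """
    w, h = frame_size
    cx, cy = w / 2.0, h / 2.0
    # Squared distance of the patch from the frame center.
    d2 = (patch_center[0] - cx) ** 2 + (patch_center[1] - cy) ** 2
    scale = (sigma * np.hypot(w, h)) ** 2
    # Weight decays from 1.0 at the center toward 0 at the boundaries.
    return float(np.exp(-d2 / (2.0 * scale)))
```

A patch at the exact center receives weight 1.0; patches near the corners receive correspondingly smaller weights, matching the intuition that viewers attend to the screen center.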
“…By solving a specifically designed objective function under selection criteria, the optimal data points are selected as key frames. For instance, Shih [23] combines the visual saliency-based attention features with the contextual game status information of sports videos and selects the key frames with the maximal visual attention scores. Liu et al [24] model the summarization procedure as a maximum a posterior problem by integrating both shot boundary detection and key frame selection.…”
Section: Related Work
mentioning confidence: 99%
“…For the gastroscopic video, clinicians tend to focus on the abnormal lesions, which reflect the human psychological preferences [23], i.e., attention. Therefore, we take both gaze and content change information as the attention prior knowledge to reduce the gap between low-level features and high-level concepts, aiming to make the result of summarization fit the clinicians' actual preferences better.…”
Section: Attention Prior
mentioning confidence: 99%