2005
DOI: 10.1109/TMM.2005.858397

Adaptive extraction of highlights from a sport video based on excitement modeling

Abstract: This paper addresses the challenge of automatically extracting the highlights from sports TV broadcasts. In particular, we are interested in a generic method of highlight extraction that does not require the development of models for the events that users are thought to interpret as highlights. Instead, we search for highlights in those video segments that are expected to excite the users most. It is realistic to assume that a highlighting event induces a steady increase …
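To make the abstract's idea concrete, the sketch below greedily selects the most "exciting" frames from a precomputed excitement curve until a target summary length is filled, then merges them into segments. This is an illustrative toy under stated assumptions, not the paper's actual adaptive algorithm: the excitement curve, the frame rate, the greedy selection, and the names extract_highlights/target_len_sec are all placeholders.

```python
import numpy as np

def extract_highlights(excitement, target_len_sec, frame_rate=25):
    # Budget of frames allowed in the summary.
    budget = int(target_len_sec * frame_rate)
    # Mark the most exciting frames first.
    order = np.argsort(excitement)[::-1][:budget]
    mask = np.zeros(len(excitement), dtype=bool)
    mask[order] = True
    # Merge selected frames into contiguous (start, end) segments.
    segments, start = [], None
    for i, selected in enumerate(mask):
        if selected and start is None:
            start = i
        elif not selected and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(mask)))
    return segments

# Toy usage: a synthetic 10-minute excitement curve with two peaks.
t = np.linspace(0, 600, 600 * 25)
curve = np.exp(-((t - 120) / 10) ** 2) + np.exp(-((t - 400) / 15) ** 2)
print(extract_highlights(curve, target_len_sec=30))
```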

Citations: cited by 119 publications (115 citation statements)
References: 22 publications
“…For example, Shao et al [52] summarise music videos automatically by using an adaptive clustering algorithm and music domain knowledge to analyse content in the music track, while detecting and clustering shots in the video track. In contrast, motion activity, cut density and sound energy have been used to produce content-based excitement curves [30] and affect curves that probabilistically infer how the user's affective state might be changed by the video content [31]. Ciocca and Schettini [12] analyse image features, in terms of the level of difference between two consecutive frames, resulting in a frame-by-frame measure of visual complexity.…”
Section: Video Summarisation (citation type: mentioning)
confidence: 99%
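The excitement curve quoted above (built from motion activity, cut density and sound energy) can be sketched as a simple weighted sum of normalised per-frame features. This is a minimal sketch with assumed equal weights, min-max normalisation, and the invented name excitement_curve; the cited work's actual model uses more careful normalisation and temporal filtering.

```python
import numpy as np

def excitement_curve(motion_activity, cut_density, audio_energy,
                     weights=(1.0 / 3, 1.0 / 3, 1.0 / 3)):
    # Min-max normalise each per-frame feature, then combine with fixed weights.
    def norm(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    w_m, w_c, w_a = weights
    return w_m * norm(motion_activity) + w_c * norm(cut_density) + w_a * norm(audio_energy)
```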
“…This methodology is relatively new in sports video analysis [3]. Ma et al [10] employ a series of psychological models on pre-attention, i.e.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…As the number of features increases, noise overwhelms the actual attention peaks and causes highlight detection to fail. Hanjalic et al [3] carefully choose three features to estimate the intensity of viewer reaction: block motion vectors, shot-cut density and audio energy. The authors furthermore employ a 1-minute-long low-pass Kaiser window filter to smooth these features and to improve the signal-to-noise ratio (SNR) of the feature-related attention signal [4].…”
Section: Related Work (citation type: mentioning)
confidence: 99%
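Below is a minimal sketch of the Kaiser-window smoothing described in the quote above. The window length of one minute of frames, the beta value, the unit-gain normalisation and the "same"-mode convolution are assumptions for illustration; the cited work specifies its own filter design.

```python
import numpy as np

def smooth_feature(feature, frame_rate=25, window_sec=60, beta=5.0):
    # Kaiser window roughly one minute long, normalised to unit gain
    # so the smoothed curve keeps the scale of the input feature.
    win = np.kaiser(int(window_sec * frame_rate), beta)
    win /= win.sum()
    return np.convolve(feature, win, mode="same")

# Usage: smooth a noisy per-frame audio-energy curve (10 minutes at 25 fps).
noisy_energy = np.abs(np.random.randn(10 * 60 * 25))
smoothed = smooth_feature(noisy_energy)
```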
“…In order to improve the robustness of event detection, multimodality-based approaches were employed for semantic extraction in sports video. For example, audio/visual features were utilized for highlight extraction [6], [7], and audio/visual/textual features were utilized for event detection [5], [9].…”
Section: Event Extraction Based On Video Content Only (citation type: mentioning)
confidence: 99%
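The multimodal approaches summarised in the quote above combine evidence from several streams; a generic late-fusion sketch is given below. The fixed weights, threshold and the name fuse_modalities are illustrative assumptions, not the fusion schemes used in the cited systems.

```python
import numpy as np

def fuse_modalities(audio_score, visual_score, w_audio=0.5, w_visual=0.5, threshold=0.6):
    # Weighted late fusion of per-segment scores from two modalities,
    # followed by a simple threshold to flag event segments.
    fused = w_audio * np.asarray(audio_score) + w_visual * np.asarray(visual_score)
    return fused >= threshold

# Usage: per-segment scores in [0, 1] from an audio and a visual detector.
events = fuse_modalities([0.2, 0.9, 0.4], [0.3, 0.8, 0.9])
```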