Bridging the semantic gap in sports (2003)
DOI: 10.1117/12.476261
Cited by 9 publications (10 citation statements); references 14 publications.
“…Early research focused on automatic shot boundary detection and key frame extraction, using low-level features such as motion and color. More advanced work focused on detection and classification of specific high-level events in particular domains, such as segmentation of stories in news broadcasts [14] or, more recently, detection of plays in sports video programs [15]. Other work includes analysis of documentaries [16] and home movies [17].…”
Section: Automatic Summarization Techniques
confidence: 99%
“…Thus, a replay scene can be taken as a significant indication of a highlight [9]. We use replay detection to localize the highlights in this work.…”
Section: Highlight Detection
confidence: 99%
“…It contains very detailed information about the event, the players/teams involved, and the approximate event time, and hence some researchers have attempted to use web-casting text for sports video analysis. For example, Li et al. [93,94,95,96] proposed a generic sports video indexing framework based on play data detection, scoreboard OCR, web-casting text analysis, and the synchronization of these three information sources; Xu et al. [97] utilized the time-stamps in web-casting text to synchronize the text events with the events from visual/audio analysis. Examples of web-casting text can be found in [98,99,100,101].…”
Section: Web-casting Text
confidence: 99%
“…Some previous research has attempted similar work; e.g., Babaguchi et al. [51] matched the shot sequence located in the time interval given by text (CC) analysis against an example sequence to identify a sports highlight and its boundary. Li et al. [95,93] performed scoreboard OCR to generate synchronization points for detected video events and used Dynamic Programming (DP) to find the best matching between these synchronization points and successive time-stamps in the web-casting text, thereby obtaining event semantics. Their methods are limited in that they require the time tags to be accurate and well synchronized with the video.…”
Section: Alignment Of Text Event and Video
confidence: 99%
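The citing text does not spell out the DP matching it mentions. As a rough illustration only, the sketch below aligns two monotone timestamp sequences (detected video events vs. web-casting text entries) by minimizing the total absolute time difference, with a fixed penalty for leaving an event unmatched. The function name, timestamps, and penalty value are hypothetical, not taken from the cited papers.

```python
# Hypothetical sketch: align video-event timestamps with web-casting text
# timestamps via dynamic programming (classic monotone sequence alignment).
# Matching a pair costs their absolute time difference; skipping an event
# on either side costs a fixed penalty.

SKIP = 30.0  # hypothetical skip penalty, in seconds


def align_events(video_ts, text_ts, skip=SKIP):
    n, m = len(video_ts), len(text_ts)
    # cost[i][j]: best cost of aligning the first i video events
    # with the first j text events.
    cost = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i * skip
    for j in range(1, m + 1):
        cost[0][j] = j * skip
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = cost[i - 1][j - 1] + abs(video_ts[i - 1] - text_ts[j - 1])
            cost[i][j] = min(match,
                             cost[i - 1][j] + skip,   # skip a video event
                             cost[i][j - 1] + skip)   # skip a text event
    # Backtrack to recover the matched (video index, text index) pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if cost[i][j] == cost[i - 1][j - 1] + abs(video_ts[i - 1] - text_ts[j - 1]):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif cost[i][j] == cost[i - 1][j] + skip:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))


# Example: three detected video events against four text events; the third
# text entry (250 s) has no visual counterpart and is skipped.
print(align_events([63.0, 190.0, 305.0], [60.0, 185.0, 250.0, 300.0]))
# → [(0, 0), (1, 1), (2, 3)]
```

Monotonicity is the key assumption here: both event streams are ordered in time, so crossings are never considered, which keeps the alignment a simple O(nm) table fill.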