2013
DOI: 10.1016/j.image.2012.11.008

Video abstraction based on the visual attention model and online clustering

Cited by 47 publications (25 citation statements)
References: 23 publications
“…The value of salient object detection models lies in their applications in many areas such as computer vision, graphics, and robotics. For instance, these models have been successfully applied in many applications such as object detection and recognition [22]- [30], image and video compression [31], [32], video summarization [33]- [35], photo collage/media re-targeting/cropping/thumb-nailing [20], [36], [37], image quality assessment [38]- [41], image segmentation [42]- [45], content-based image retrieval and image collection browsing [46]- [49], image editing and manipulating [50]- [53], visual tracking [54]- [60], object discovery [61], [62], and human-robot interaction [63]- [65].…”
mentioning
confidence: 99%
“…Another approach groups the frames into visually similar clusters and selects the frames closest to the cluster centers as key frames [17,18]. Zhuang et al. [19] group the frames into clusters and select the key frames from the largest clusters.…”
Section: Related Work
mentioning
confidence: 99%
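The cluster-and-centroid idea described in the excerpt above can be illustrated with a minimal sketch. This is not the method of [17,18] or of Zhuang et al. [19]; the frame descriptor (a per-channel color histogram), the use of k-means, and the fixed number of clusters are assumptions made purely for illustration.

```python
# Minimal sketch of cluster-based key-frame selection (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def color_histogram(frame, bins=16):
    """Simple per-channel color histogram used as a frame descriptor (an assumed feature choice)."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
            for c in range(frame.shape[-1])]
    hist = np.concatenate(hist).astype(float)
    return hist / (hist.sum() + 1e-9)

def select_key_frames(frames, n_clusters=5):
    """Group frames into visually similar clusters and return, per cluster,
    the index of the frame closest to the cluster center."""
    features = np.stack([color_histogram(f) for f in frames])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    key_frames = []
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        if members.size == 0:
            continue
        dists = np.linalg.norm(features[members] - km.cluster_centers_[k], axis=1)
        key_frames.append(int(members[np.argmin(dists)]))
    return sorted(key_frames)

if __name__ == "__main__":
    # Random arrays stand in for decoded video frames (H x W x 3, uint8).
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
              for _ in range(60)]
    print(select_key_frames(frames, n_clusters=5))
```

A variant closer to Zhuang et al. [19] would rank clusters by size and keep key frames only from the largest ones.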
“…This metric was used in [24,62,63,68,69] for 2D video summaries. Among 3D key-frame extraction methods, the computational complexity metric is used only by Lee et al. [22], where the computational complexities of the methods of Lee et al. and of Huang et al. [18] are compared.…”
Section: Comparison of User Summaries (CUS)
mentioning
confidence: 99%