Proceedings of the 28th ACM International Conference on Multimedia 2020
DOI: 10.1145/3394171.3413632
Performance over Random

Abstract: This paper proposes a new evaluation approach for video summarization algorithms. We start by studying the currently established evaluation protocol; this protocol, defined over the ground-truth annotations of the SumMe and TVSum datasets, quantifies the agreement between the user-defined and the automatically-created summaries with F-Score, and reports the average performance on a few different training/testing splits of the used dataset. We evaluate five publicly-available summarization algorithms under a la…
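The F-Score agreement mentioned in the abstract can be illustrated with a minimal sketch. Note this is a simplified per-frame version: the actual SumMe/TVSum protocol involves segment-level matching, which is more involved than shown here.

```python
# Minimal sketch: F-Score between a user-defined and an automatically-created
# summary, each represented as a binary per-frame selection vector.
# (The established protocol matches at segment level; this per-frame
# formulation is a simplification for illustration.)

def summary_fscore(user, auto):
    """F-Score (harmonic mean of precision and recall) of two 0/1 selections."""
    overlap = sum(u and a for u, a in zip(user, auto))
    if overlap == 0:
        return 0.0
    precision = overlap / sum(auto)  # fraction of auto-selected frames that match
    recall = overlap / sum(user)     # fraction of user-selected frames recovered
    return 2 * precision * recall / (precision + recall)

user_summary = [1, 1, 0, 0, 1, 0]
auto_summary = [1, 0, 0, 1, 1, 0]
print(round(summary_fscore(user_summary, auto_summary), 3))  # → 0.667
```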

Cited by 12 publications (6 citation statements). References 38 publications.
“…Feature-based: Motion, color, dynamic content, gestures, audio-visual cues, voice transcripts, objects, and other factors are used to classify feature-based video summarization techniques. Apostolidis et al. (2021) surveyed the relevant literature on deep-learning-based video summarization and covered protocol aspects of summary evaluation.…”
Section: Semantic-based
confidence: 99%
“…For example, a model with too few features may be inaccurate, whereas a model with too many features may be overfitted (Gygli et al., 2014). Deep-learning-based video summarization algorithms fall into the following broad categories: supervised approaches, unsupervised approaches, and semi-supervised approaches (Apostolidis et al., 2021). The summary should keep keyframes from the original video.…”
Section: Training Strategy Based
confidence: 99%
“…In the case of YouTube, we excluded 10 cartoon videos, since the networks used for feature extraction (the GoogleNet of [24]) and aesthetic quality estimation (the FCN architecture of [3]) cannot provide meaningful representations and aesthetics measurements for the content of these videos. Finally, motivated by the recent reports in [2] about the varying difficulty of the different randomly-created data splits of other relevant datasets (which are extensively used for evaluating video summarization algorithms), and to reduce the impact of the data split used for training and testing our method, we ran our experiments on 10 different randomly-created splits; in the following we report the average performance over these runs.…”
Section: Implementation Details
confidence: 99%
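The split-averaging procedure described in the statement above can be sketched as follows. The `evaluate_on_split` call is a hypothetical stand-in for training and testing a summarizer on one split; the split sizes and seed are illustrative assumptions.

```python
import random

# Sketch: build several randomly-created train/test splits and average the
# test performance across them, as the quoted implementation details describe.

def make_splits(video_ids, n_splits=10, test_ratio=0.2, seed=0):
    """Return n_splits random (train, test) partitions of the video IDs."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_splits):
        ids = list(video_ids)
        rng.shuffle(ids)
        cut = int(len(ids) * (1 - test_ratio))
        splits.append((ids[:cut], ids[cut:]))
    return splits

videos = [f"video_{i}" for i in range(25)]  # e.g. SumMe contains 25 videos
splits = make_splits(videos)

# Hypothetical usage, with evaluate_on_split standing in for one train/test run:
# avg_f = sum(evaluate_on_split(train, test) for train, test in splits) / len(splits)
```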
“…In the most challenging scenario, where only one thumbnail is selected and compared against the 3-thumbnail ground truth (P@1), the proposed method is by far more competitive than the considered baseline, showing a performance increase of approximately 100% over random selection. In addition, our method seems to be more effectively tailored to the video thumbnail selection task than the state-of-the-art video summarization algorithm from [1], which was evaluated under the same experimental conditions using its publicly-available implementation 2 .…”
Section: Performance Comparisons
confidence: 99%
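A plain reading of the P@1 metric in the statement above is Precision-at-k with k=1: the top-ranked thumbnail counts as a hit if it matches any of the 3 ground-truth thumbnails. This interpretation and the frame identifiers below are assumptions for illustration, not the cited paper's exact definition.

```python
# Sketch of Precision-at-k for thumbnail selection: the fraction of the
# method's top-k ranked frames that appear in the ground-truth thumbnail set.

def precision_at_k(ranked_frames, ground_truth, k=1):
    """P@k over a ranked list of frame IDs and a set of ground-truth IDs."""
    top_k = ranked_frames[:k]
    hits = sum(1 for frame in top_k if frame in ground_truth)
    return hits / k

ranked = ["f17", "f02", "f33", "f41"]  # frames ranked by a method (illustrative)
gt = {"f02", "f17", "f40"}             # 3-thumbnail ground truth (illustrative)
print(precision_at_k(ranked, gt, k=1))  # → 1.0
```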
“…The F-score assesses the similarity between selected and annotated frames or segments by calculating the harmonic mean of precision and recall, where precision represents the proportion of relevant frames correctly included in the summary. Otani et al. [87] proposed rank-order statistics, namely the Kendall τ and Spearman ρ coefficients, as metrics to enhance video summary evaluation by considering importance scores. In contrast, Apostolidis et al. [88] introduced an alternative evaluation method called Performance over Random (PoR) to address limitations of existing evaluation protocols by considering the complexity of each used data split.…”
Section: Video Summarization Evaluation Metric
confidence: 99%
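The rank-order statistics mentioned in the last statement compare predicted frame-importance scores against human annotations. Below is a self-contained, no-ties sketch of both coefficients; the importance scores are illustrative, and the closing PoR ratio (method score normalized by a random baseline's score on the same split) is a plain-reading assumption about the metric, not the paper's exact formula.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (assumes no ties): (concordant - discordant) / pairs."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        sign = (xi - xj) * (yi - yj)
        concordant += sign > 0
        discordant += sign < 0
    return (concordant - discordant) / (concordant + discordant)

def spearman_rho(x, y):
    """Spearman rank correlation (assumes no ties): 1 - 6*sum(d^2)/(n*(n^2-1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        result = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            result[i] = rank
        return result
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

def performance_over_random(method_score, random_score):
    """Assumed PoR sketch: method performance normalized by a random baseline."""
    return method_score / random_score

predicted = [0.9, 0.1, 0.4, 0.8, 0.2]  # illustrative importance scores
human = [0.8, 0.2, 0.3, 0.9, 0.1]      # illustrative human annotations

print(kendall_tau(predicted, human))    # → 0.6
print(spearman_rho(predicted, human))   # → 0.8
```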