2018
DOI: 10.1002/asi.24103
Video summarization using event‐related potential responses to shot boundaries in real‐time video watching

Abstract: Our aim was to develop an event‐related potential (ERP)‐based method to construct a video skim consisting of key shots to bridge the semantic gap between the topic inferred from a whole video and that from its summary. Mayer's cognitive model was examined, wherein the topic integration process of a user evoked by a visual stimulus can be associated with long‐latency ERP components. We determined that long‐latency ERP components are suitable for measuring a user's neuronal response through a literature review. …

Cited by 3 publications (1 citation statement)
References 43 publications (62 reference statements)
“…This results in a precision of 62.6%, which is higher than that of VSUMM (54.4%), sparse dictionary (SD) approaches (48.3%), and Key Point-Based Key-frame Selection (KBKS) (46%). Comparable models are described in the studies [13][14][15][16], which cover event-related potential responses, video summarization using superpixels, Joint Integer Linear Programming (JILP), and careful video-summary refinements. Of these, the JILP approach outperforms the other techniques, achieving a precision of 59.9% across multiple datasets.…”
Section: Literature Review: Deep Learning-Based Video Summarization
confidence: 99%