Proceedings of the 13th International Conference on the Foundations of Digital Games 2018
DOI: 10.1145/3235765.3235781
Deep unsupervised multi-view detection of video game stream highlights

Abstract: We consider the problem of automatic highlight-detection in video game streams. Currently, the vast majority of highlight-detection systems for games are triggered by the occurrence of hard-coded game events (e.g., score change, end-game), while most advanced tools and techniques are based on detection of highlights via visual analysis of game footage. We argue that in the context of game streaming, events that may constitute highlights are not only dependent on game footage, but also on social signals that are…
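
To illustrate the general idea described in the abstract (fusing game footage with social signals such as the streamer's face and audio, without highlight labels), the following is a minimal sketch in PyTorch. It assumes one autoencoder per view and uses summed per-view reconstruction error as an unsupervised highlight score; the module names, feature dimensions, and scoring rule are illustrative assumptions, not the architecture from the paper.

import torch
import torch.nn as nn

class ViewAutoencoder(nn.Module):
    """Compress one view (e.g. game footage, streamer face, audio) and reconstruct it."""
    def __init__(self, in_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def highlight_scores(views: dict[str, torch.Tensor],
                     models: dict[str, ViewAutoencoder]) -> torch.Tensor:
    """Score each time window by its summed per-view reconstruction error.

    Assumed intuition: windows that deviate from the stream's typical
    appearance across views are candidate highlights.
    """
    scores = 0.0
    for name, x in views.items():
        recon, _ = models[name](x)
        scores = scores + ((recon - x) ** 2).mean(dim=1)
    return scores

if __name__ == "__main__":
    # Toy example: 100 time windows, three pre-extracted feature views
    # (dimensions are arbitrary placeholders).
    views = {"footage": torch.randn(100, 512),
             "face": torch.randn(100, 128),
             "audio": torch.randn(100, 64)}
    models = {name: ViewAutoencoder(feats.shape[1]) for name, feats in views.items()}
    # Training loop omitted: the autoencoders would be fit on the stream's own
    # windows, with no highlight labels, before scoring.
    with torch.no_grad():
        top5 = highlight_scores(views, models).topk(5).indices
    print(top5)

In this sketch the multi-view fusion is simply a sum of per-view errors; any weighting or joint latent space would be a further design choice not specified here.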

Cited by 26 publications (14 citation statements); references 25 publications.

Citation statements:
“…At the same time, the most common approaches for analysing player experience, besides game and gameplay information, heavily rely on direct measurements from players, such as face monitoring, speech and physiological signals; see, e.g., [4], [36], [39]. Unlike these approaches, our methodology relies solely on gameplay video information.…”
Section: Affect Modeling in Games
confidence: 99%
“…As mentioned in Section 2, there are a number of recent works which have attempted to utilize emotion in addition to video features to generate highlights. Works such as [43,44] produce high quality highlights on football videos and video game streams, but may not perform well on more varied videos, as selected in this work. In addition, [43] requires human input to record emotion, and [44] is designed specifically for streaming setups.…”
Section: Discussion
confidence: 99%
“…Fiao et al. [43] also use emotions along with video to generate highlights; however, they focus on sports videos and require external hardware and human interaction to gather emotion data. Ringer et al. [44] used deep learning to measure emotion from a face camera during streaming of video games; this work is restricted to highlight generation for video games, which, similarly to sports, is strongly correlated with audio energy. Kaklauskas et al. [45,46] study video ads specifically, and require a plethora of hardware and human interaction to gather various physiological signals.…”
Section: Hybrid Content Techniques
confidence: 99%
“…Thus, the labels were revised and a final set of seven labels was produced: "Pickaxing for Gold"; "Exploring"; "Tunneling - Pickaxe", the player uses a pickaxe to get to another area or clear an area for a mine; "Tunneling - Explosive", a player uses an explosive to get to another area or clear an area for a mine; "Explosive - Inner", a […]
[Figure 6. Sequence graph showing (a) high-performing team players in colored nodes and (b) players from one high-performing (25, 26, 27, 28, 29) and one low-performing (30, 31, 32) team.]…”
Section: Process
confidence: 99%