2021
DOI: 10.3390/s21217176
A High-Density EEG Study Investigating VR Film Editing and Cognitive Event Segmentation Theory

Abstract: This paper introduces a cognitive psychological experiment that was conducted to analyze how traditional film editing methods and the application of cognitive event segmentation theory perform in virtual reality (VR). Thirty volunteers were recruited and asked to watch a series of short VR videos designed in three dimensions: time, action (characters), and space. Electroencephalograms (EEG) were recorded simultaneously during their participation. Subjective results show that any of the editing methods used wou…

Cited by 10 publications (11 citation statements)
References: 44 publications
“…Event segmentation theory postulates that what we perceive as continuous experience is composed of discrete events, delimited by changes in context, scene, discussion topic, etc. This theory has been verified by fMRI and EEG experiments that reveal active brain subnetworks and transitions between them when people listen to stories or watch videos (Speer et al. 2009; Zacks et al. 2010; Tian et al. 2021).…”
Section: Dynamics: Brain States and In The Mind Spaces (mentioning)
confidence: 74%
“…Simple visual feature detection is possible in the occipital visual cortex, while complex feature motion discrimination is mostly performed in the temporal lobe [28,29]. Higher levels of sensory processing, linguistic abilities, and spatial awareness are all attributed to the parietal areas [30]. According to research, the frontal area is the focus of higher cognitive functions.…”
Section: Discussion (mentioning)
confidence: 99%
“…The effect of edit type on load was not statistically significant for total-scale scores, but showed a trend of higher scores for the cross-axis editing clips than for the continuity editing clips; this may be due to our small sample size, which could be increased in future work to further examine the effect of editing methods on viewing load. For the IPQ scale results, Tian et al. [31] demonstrated a difference in total IPQ scores between edited and unedited footage, but no statistical difference in IPQ scores between different editing groups. Our work further examined the differences in IPQ total and subscale scores by edit type, and the results showed no statistical differences between edit types on either the total or the subscale scores; we therefore hypothesized that edit type is not a major factor affecting participants' subjective immersion.…”
Section: Subjective Rating (mentioning)
confidence: 99%
“…VR environments result in greater early visual attention and higher α power than 2D films [30]. Tian et al. [31] studied VR film editing and cognitive event segmentation theory using high-density EEG, concluding that spatial changes had the greatest impact on viewers and that editing could help viewers better understand the footage. Cao et al. [32] conducted a preliminary exploration of the implementation of montage editing techniques in VR films, arguing that the VR portal effect may be beneficial for the viewer's cognitive load compared with cutting and fading, but may negatively affect recall performance.…”
Section: Introduction (mentioning)
confidence: 99%