Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems
DOI: 10.1145/3027063.3053211
In360

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1

Citation Types

0
3
0

Year Published

2018
2018
2023
2023

Publication Types

Select...
4
2
1

Relationship

0
7

Authors

Journals

Cited by 10 publications (3 citation statements). References 0 publications.
“…To test the consistency of fused continuous V-A ratings, essentially how effective they are, we implement a temporal analysis of each video's annotation result. Suppose A_ij is the fused arousal value of video i; [if its mean falls within the] low (1–5) or high (5–9) arousal range (cf., [114]), the overall predicted (i.e., classified) arousal for video i equals the corresponding low/high label. The predicted valence for all eight videos is similarly calculated.…”
Section: Analysis: Viewport-Dependent Fused Emotion Annotations
confidence: 99%
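The classification step quoted above can be sketched in a few lines: average the fused arousal trace for a video and threshold it into a low/high label. This is a hedged illustration, not the authors' code; the function name `classify_arousal` and the threshold of 5.0 (inferred from the quoted low 1–5 / high 5–9 ranges on a 9-point scale) are assumptions.

```python
# Sketch: classify a video's overall arousal from its fused per-timestep
# ratings by thresholding the mean. The 5.0 cutoff is an assumption
# inferred from the quoted low (1-5) / high (5-9) ranges.

def classify_arousal(fused_ratings, threshold=5.0):
    """Return 'low' or 'high' for a sequence of fused arousal values A_ij."""
    mean_arousal = sum(fused_ratings) / len(fused_ratings)
    return "high" if mean_arousal >= threshold else "low"

# Example: two hypothetical fused arousal traces
print(classify_arousal([3.2, 4.1, 3.8, 4.5]))  # low
print(classify_arousal([6.7, 7.2, 5.9, 8.1]))  # high
```

Per the quotation, the same mean-and-threshold procedure would be applied to the valence traces of all eight videos.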
“…Within such experiences, several works have established that immersive VR environments can evoke a wide range of emotions in humans [28,31,76,78] and, through sensing of physiological and behavioral markers (e.g., brain and heartbeat dynamics), can enable automatic recognition of valence and arousal [68]. Whether the goal is to induce, track, or recognize emotion for educational purposes [1], embodied virtual tourism [7], or news engagement [104,106], or to develop emotion-recognition and adaptive systems [68] within immersive VR experiences, it is important to collect accurate and precise ground-truth emotion labels. However, collecting emotional responses to 360° VR videos can be time-consuming, can demand considerable cognitive effort and interpretation [103], or may be carried out outside the VR experience (cf., [18,76]), which may break the sense of immersion and presence [54,87].…”
Section: Introduction
confidence: 99%
“…These aspects can explain the increasing use of 360° video by researchers and facilitators in education and training domains over the last 10 years (Reyna Zeballos, 2018). Currently, 360° videos are used in a wide range of domains: with students to change their preconceived notions about their career (Assilmia et al, 2017), to create virtual field trips for PSTs to integrate into future classrooms (Huh, 2020), in medical education (Ulrich et al, 2019), in sports training (basketball players: Panchuk et al, 2018; officials: Kittel et al, 2020a), and to teach water-safety skills to children (Araiza-Alba et al, 2021). The research field of 360° video use in TE is relatively recent, and Reyna Zeballos (2018) highlights that research in the field is not yet robust.…”
Section: Introduction
confidence: 99%