Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI 2020)
DOI: 10.1145/3313831.3376808
RCEA: Real-time, Continuous Emotion Annotation for Collecting Precise Mobile Video Ground Truth Labels

Abstract: Collecting accurate and precise emotion ground truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or allow real-time, continuous emotion annotations (RCEA) only for desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching, and validated its usability and reliability in a controlled, indoor (N=12) and later outdoor (N=20)…

Cited by 35 publications (46 citation statements)
References: 92 publications
“…As shown in Figure 10, more than of samples from CASE and of samples from MERCA belong to the neutral class. The resulting high proportion of neutral V-A ratings cannot be attributed to the mobile aspect of MERCA’s data collection, given that users spent most of their time (up to 73.2%) standing while watching and annotating [32]. We instead attribute this phenomenon to the act of annotating continuously, irrespective of environment (static vs. mobile).…”
Section: Discussion (mentioning)
confidence: 99%
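The neutral-class imbalance described in this statement is straightforward to reproduce on any continuously annotated valence-arousal stream. The sketch below is a minimal illustration, not code from either paper: the column names, the 1–9 rating scale, and the ±0.5 neutral band around the midpoint are assumptions chosen for the example.

```python
import pandas as pd

# Hypothetical continuous annotation log: one valence/arousal pair per sample.
# The 1-9 scale and the neutral band (midpoint +/- 0.5) are assumptions,
# not values taken from CASE, MERCA, or the RCEA paper.
ratings = pd.DataFrame({
    "valence": [5.0, 5.2, 4.8, 7.1, 2.3, 5.1],
    "arousal": [5.0, 4.9, 5.1, 6.8, 3.0, 5.0],
})

MIDPOINT = 5.0      # centre of a 1-9 self-assessment scale (assumed)
NEUTRAL_BAND = 0.5  # half-width of the assumed "neutral" region

# A sample is "neutral" only if both valence and arousal sit inside the band.
neutral = (
    ratings["valence"].sub(MIDPOINT).abs().le(NEUTRAL_BAND)
    & ratings["arousal"].sub(MIDPOINT).abs().le(NEUTRAL_BAND)
)
print(f"neutral samples: {neutral.mean():.1%}")
```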
“…To verify the validity of CorrNet using wearable physiological sensors, we collected continuous self-annotated physiological signals. Here, users annotated their valence and arousal levels using a continuous mobile annotation technique (cf. [32]) in a controlled, outdoor environment. This data collection resulted in the Mobile Emotion Recognition with Continuous Annotation (MERCA) dataset, which we describe below in Section 4.2.…”
Section: Datasets (mentioning)
confidence: 99%
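For context, pairing a continuously sampled annotation stream with wearable sensor data typically comes down to aligning both onto a common time base. The snippet below is a generic sketch of that alignment step under assumed sampling rates and signal names; it is not the MERCA collection or processing pipeline itself.

```python
import numpy as np

# Hypothetical streams: annotations at ~10 Hz, a physiological signal at 64 Hz.
# Rates, durations, and the zero-order-hold alignment are illustrative only.
ann_t = np.arange(0.0, 30.0, 0.1)             # annotation timestamps (s)
ann_valence = np.random.uniform(1, 9, ann_t.size)

sig_t = np.arange(0.0, 30.0, 1.0 / 64.0)      # sensor timestamps (s)
sig = np.random.randn(sig_t.size)             # e.g. a skin-conductance channel

# Label every sensor sample with the most recent annotation seen so far.
idx = np.searchsorted(ann_t, sig_t, side="right") - 1
labels = ann_valence[np.clip(idx, 0, ann_t.size - 1)]

assert labels.shape == sig.shape  # one continuous label per sensor sample
```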
“…For inputting annotations continuously, prior research uses either joystick-based controllers (e.g., DARMA [10] or CASE [32]), or a physical radial controller when specifying a single, continuous dimension such as emotional intensity (e.g., RankTrace [20]). Recently, Zhang et al. [38] proposed RCEA, which is suitable for mobile touchscreens and mobile video watching scenarios. Given that in our case users will be wearing an HMD, we need to enable easy controller-based input that can be used while users’ visual attention is occupied by the 360° video content.…”
Section: Annotating Emotions Continuously (mentioning)
confidence: 99%
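As an illustration of the kind of controller-based input this statement discusses, the sketch below maps a normalised 2D thumbstick (or touch) position onto the valence-arousal plane. The axis assignment, dead zone, and output range are assumptions made for the example, not the mapping defined by RCEA, DARMA, CASE, or RankTrace.

```python
from dataclasses import dataclass

@dataclass
class VARating:
    valence: float  # -1.0 (negative) .. +1.0 (positive); range is an assumption
    arousal: float  # -1.0 (calm)     .. +1.0 (excited)

DEAD_ZONE = 0.1  # ignore small stick drift; value chosen for the example

def stick_to_va(x: float, y: float) -> VARating:
    """Map a normalised 2D controller position (x, y in [-1, 1]) to V-A."""
    x = 0.0 if abs(x) < DEAD_ZONE else x
    y = 0.0 if abs(y) < DEAD_ZONE else y
    # Assumed convention: horizontal axis -> valence, vertical axis -> arousal.
    return VARating(valence=max(-1.0, min(1.0, x)),
                    arousal=max(-1.0, min(1.0, y)))

print(stick_to_va(0.05, 0.8))  # near-neutral valence, high arousal
```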