2014
DOI: 10.1007/s11042-013-1830-0
Implicit video emotion tagging from audiences’ facial expression

Cited by 20 publications (10 citation statements)
References 31 publications
“…Other methods include a change in emotional intensity before and after watching videos [5], building user personality models [4], and video affective content analysis covering spontaneous responses and emotional descriptors [3]. Furthermore, the effect of induced emotions in viewers plays a vital role in affective modeling. Research on automatic, machine-generated prediction of "intended elicited emotions in humans" has accomplished many milestones using audiovisual features [25], [26], implicit video emotion tagging from audiences' facial expressions through a Bayesian network [65], affective image analysis based on perception subjectivity [67]-[70], phonetic analysis of speech as in Praat [28], and text-based extraction of tone and emotion as in IBM Watson's Tone Analyzer [27]. However, these techniques have drawbacks relative to approaches such as long-term context-aware information extraction, modeling of continuous affective dynamics, reinforcement through implicit feedback mechanisms, and real-time decision making using contextual information.…”
Section: Related Work
confidence: 99%
“…Yoo and Cho [21] adopted an interactive genetic algorithm to realize individual scene video retrieval. Wang et al [90] employed a Bayesian network to capture the relationships between the video's common affective tag and the user's specific emotion label, and used the captured relationship to predict the video's personal tag. Soleymani et al [91] tackled the personal affective representation of movie scenes using the video's audiovisual features as well as physiological responses of participants to estimate the arousal and valence degree of scenes.…”
Section: Personalized Video Affective Content Analysis
confidence: 99%
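To make the Bayesian-network idea in the statement above concrete, here is a minimal sketch in Python. It assumes a toy chain structure (common tag → personal emotion → displayed expression) with invented probability tables; the actual network structure and parameters in Wang et al. [90] may well differ.

```python
# Toy Bayesian network for personalized implicit tagging: a hypothetical
# chain CommonTag -> PersonalEmotion -> Expression. All probabilities are
# invented for illustration; they are not from Wang et al. [90].

STATES = ("positive", "negative")

# P(CommonTag): prior over the video's common affective tag.
p_common = {"positive": 0.6, "negative": 0.4}

# P(PersonalEmotion | CommonTag): viewers usually, but not always,
# feel the emotion the video is commonly tagged with.
p_personal = {
    "positive": {"positive": 0.8, "negative": 0.2},
    "negative": {"positive": 0.3, "negative": 0.7},
}

# P(Expression | PersonalEmotion): displayed expressions are noisy
# evidence of inner feeling (people do not always show what they feel).
p_expr = {
    "positive": {"smile": 0.7, "neutral": 0.3},
    "negative": {"smile": 0.1, "neutral": 0.9},
}

def posterior_personal(observed_expr: str) -> dict:
    """P(PersonalEmotion | Expression) by enumeration, marginalizing CommonTag."""
    joint = {
        e: sum(p_common[c] * p_personal[c][e] for c in STATES)
           * p_expr[e][observed_expr]
        for e in STATES
    }
    z = sum(joint.values())
    return {e: p / z for e, p in joint.items()}

print(posterior_personal("smile"))  # -> {'positive': ~0.91, 'negative': ~0.09}
```

Observing a smile raises the posterior on a positive personal tag well above the prior, which is the essence of using a captured tag-emotion-expression relationship to personalize the video's common tag.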
“…However, users' internal feelings and displayed facial behaviors are not always the same, even though they are related [112], since expressions of emotion vary with person and context. Wang et al [90] employed a Bayesian network to capture the relationship between facial expressions and inner personalized emotions, and thus improved the performance of video emotion tagging.…”
Section: Implicit Video Affective Content Analysis Using Spontaneous
confidence: 99%
“…This problem refers to the association of tags to multimedia data based on the spontaneous reactions of users while watching the content. This reaction is measured automatically based on their facial expressions [145] or even based on their physiological reactions [65]. The wide availability of built-in sensors within devices capable of multimedia reproduction makes this an interesting and effortless way of tagging content.…”
Section: Assisting Behaviour Understanding Research
confidence: 99%
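As a rough illustration of the tagging pipeline described in that statement, the sketch below aggregates hypothetical per-viewer facial expression predictions into a single video-level tag by majority vote. Function and label names are invented for the example; the cited works [145], [65] use richer measurement and fusion schemes than a plain vote.

```python
from collections import Counter

# Hypothetical implicit-tagging aggregation: each viewer's per-frame
# expression predictions (e.g. from an off-the-shelf classifier) are
# reduced to one label per viewer, then majority-voted into a video tag.

def viewer_label(frame_predictions: list[str]) -> str:
    """Most frequent expression shown by one viewer across the clip."""
    return Counter(frame_predictions).most_common(1)[0][0]

def implicit_tag(per_viewer_frames: list[list[str]]) -> str:
    """Majority vote over viewers yields the clip's implicit emotion tag."""
    labels = [viewer_label(frames) for frames in per_viewer_frames]
    return Counter(labels).most_common(1)[0][0]

viewers = [
    ["happy", "happy", "neutral"],
    ["happy", "surprise", "happy"],
    ["neutral", "neutral", "happy"],
]
print(implicit_tag(viewers))  # -> "happy"
```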