2018
DOI: 10.11591/ijece.v8i5.pp4042-4046

Multi-modal Asian Conversation Mobile Video Dataset for Recognition Task

Abstract: Images, audio, and videos have long been used by researchers to develop tasks in human facial recognition and emotion detection. Most available datasets focus on either static expressions, short videos of emotion changing from neutral to peak, or differences in sound to detect a person's current emotion. Moreover, the common datasets were collected and processed in the United States (US) or Europe, and only a few originated in Asia. In thi…

Cited by 8 publications (4 citation statements)
References 12 publications
“…The SEMAINE Dataset has 24 interaction sessions with a total of 95 character interactions and 190 video clips [24]. Some datasets in the facial expression recognition area were also collected with Asian respondents, for example, the Japanese Female Facial Expression (JAFFE) dataset [25], the Multimodal Asian Conversation Dataset [26], and the Indonesian Mixed Emotion Dataset (IMED) [27]. The Japanese Female Facial Expression (JAFFE) dataset [25] provides seven classifications of emotions (six basic emotions and neutral) from 213 images of 10 subjects.…”
Section: Datasets
confidence: 99%
“…The Japanese Female Facial Expression (JAFFE) dataset [25] provides seven classifications of emotions (six basic emotions and neutral) from 213 images of 10 subjects. The Multimodal Asian Conversation Dataset [26] provides seven classifications of emotions (six basic emotions and neutral) from more than 100 minutes of video of 5 subjects. Finally, the Indonesian Mixed Emotion Dataset (IMED) [27] consists of 570 videos and 66,819 images categorised into seven single emotions (anger, disgust, fear, happy, sadness, surprise, and neutral) and twelve mixed emotions [27].…”
Section: Datasets
confidence: 99%
“…The SEMAINE Dataset has 24 interaction sessions with a total of 95 character interactions and 190 video clips [19]. Some datasets in the facial expression recognition area were also collected with Asian respondents, for example, JAFFE [20], the Multimodal Asian Conversation Dataset [21], and the Indonesian Mixed Emotion Dataset (IMED) [22]. The Japanese Female Facial Expression (JAFFE) dataset [20] provides seven classifications of emotions (six basic emotions and neutral) from 213 images of 10 subjects.…”
Section: Datasets
confidence: 99%
“…The Japanese Female Facial Expression (JAFFE) dataset [20] provides seven classifications of emotions (six basic emotions and neutral) from 213 images of 10 subjects. The Multimodal Asian Conversation Dataset [21] provides seven classifications of emotions (six basic emotions and neutral) from more than 100 minutes of video of 5 subjects. Finally, the Indonesian Mixed Emotion Dataset (IMED) [22] consists of 570 videos and 66,819 images categorised into seven single emotions (anger, disgust, fear, happy, sadness, surprise, and neutral) and twelve mixed emotions [22].…”
Section: Datasets
confidence: 99%
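The citation statements above quote concrete statistics for each dataset (JAFFE: 213 images of 10 subjects; Multimodal Asian Conversation: over 100 minutes of video from 5 subjects; IMED: 570 videos and 66,819 images). A minimal sketch of how one might tabulate those figures for comparison is below; the `EMOTIONS` list follows the six-basic-emotions-plus-neutral scheme the statements describe, and all variable and function names here are illustrative, not part of any dataset's official tooling.

```python
# Illustrative sketch: collecting the dataset statistics quoted in the
# citation statements into a simple lookup table. The emotion labels are
# the six basic emotions plus neutral, as used by JAFFE, the Multimodal
# Asian Conversation Dataset, and IMED (single emotions).
EMOTIONS = ["anger", "disgust", "fear", "happy", "sadness", "surprise", "neutral"]

# Figures taken directly from the citation statements above; keys and
# structure are hypothetical, chosen only for this sketch.
DATASETS = {
    "JAFFE": {"subjects": 10, "images": 213, "emotion_classes": len(EMOTIONS)},
    "Multimodal Asian Conversation": {"subjects": 5, "video_minutes": 100,
                                      "emotion_classes": len(EMOTIONS)},
    "IMED": {"videos": 570, "images": 66819,
             "single_emotion_classes": 7, "mixed_emotion_classes": 12},
}

def summarize(name: str) -> str:
    """Return a one-line summary of a dataset's recorded statistics."""
    stats = ", ".join(f"{key}={value}" for key, value in DATASETS[name].items())
    return f"{name}: {stats}"

for dataset_name in DATASETS:
    print(summarize(dataset_name))
```

Keeping the figures in a plain dictionary like this makes it easy to extend the comparison when further Asian-respondent datasets are added.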