2017 International Automatic Control Conference (CACS)
DOI: 10.1109/cacs.2017.8284239

An affective mood booster robot based on emotional processing unit

Cited by 16 publications (2 citation statements) · References 11 publications

“…In an effort to improve accuracy when the information is taken from the wild (as for social robots in service), an emerging strategy is to consider multimodal or multisource approaches. Thus, a few works have started to adopt multimodal approaches that combine several modalities based on the information captured by the robots’ sensors, such as: (i) Kinect cameras to recognise emotion from human facial expression and gait, as in the study presented in [46]; (ii) cameras and the robot’s speech system, where some studies combine facial expressions and speech [47, 48, 49, 50, 51, 52, 53] or body gesture and voice [5] to detect human emotions and accordingly improve HRI or navigation; (iii) text and speech, by converting speech to text and then applying Natural Language Processing (NLP) to recognise emotions, as done in [54]. However, this topic within robotics is still limited, as the survey presented in [55] reported.…”
Section: Related Work
confidence: 99%
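
A minimal sketch of the text-and-speech pipeline described in item (iii) above: speech is transcribed to text, and an NLP step then maps the transcript to an emotion label. The transcription function is a stub standing in for a real ASR engine, and the keyword lexicon is a toy substitute for a trained classifier; all names here are illustrative assumptions, not taken from [54].

```python
# Sketch of the speech -> text -> NLP emotion pipeline from (iii).
# transcribe() is a stub for a real ASR engine; EMOTION_LEXICON is a
# toy substitute for a trained text-emotion classifier.

EMOTION_LEXICON = {
    "happy": "joy", "great": "joy", "love": "joy",
    "sad": "sadness", "lonely": "sadness", "miss": "sadness",
    "angry": "anger", "hate": "anger",
}

def transcribe(audio_frames: bytes) -> str:
    """Stub: a real robot would send its microphone stream to an
    automatic speech recognition engine and return the transcript."""
    return "i feel sad and lonely today"  # placeholder transcript

def recognise_emotion(text: str) -> str:
    """Count lexicon hits per emotion and return the most frequent label."""
    counts = {}
    for token in text.lower().split():
        label = EMOTION_LEXICON.get(token)
        if label is not None:
            counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get) if counts else "neutral"

if __name__ == "__main__":
    transcript = transcribe(b"")          # audio bytes would come from the robot
    print(recognise_emotion(transcript))  # -> sadness
```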
“…Different images can intuitively convey human emotions. Traditional image feature extraction methods usually require manually specifying which image features to extract, such as color features [3] and texture features [4]. Although such features can accurately describe an image, manual feature screening is very tedious, and multiple features cannot be taken into account at once.…”
Section: Introduction
confidence: 99%
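
A minimal sketch of the hand-crafted features this passage contrasts with learned ones: a per-channel color histogram (a color feature) and a crude gradient statistic standing in for classic texture descriptors such as GLCM. The function names and the 8-bin choice are illustrative assumptions, not drawn from [3] or [4].

```python
# Hand-crafted color and texture features, manually chosen per task --
# the tedium the passage points out. NumPy only, for self-containment.
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Per-channel intensity histogram of an HxWx3 uint8 image,
    concatenated and normalised to sum to 1."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    feat = np.concatenate(hists).astype(float)
    return feat / feat.sum()

def texture_stats(img: np.ndarray) -> np.ndarray:
    """Mean and variance of gradient magnitude on the grey-level image,
    a crude stand-in for classic texture descriptors (e.g. GLCM)."""
    grey = img.mean(axis=2)
    gy, gx = np.gradient(grey)
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.var()])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    features = np.concatenate([color_histogram(img), texture_stats(img)])
    print(features.shape)  # (26,): 3 channels x 8 bins + 2 texture stats
```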