2022
DOI: 10.2139/ssrn.4143334

A Multimodal Model for Predicting Feedback Position and Type During Conversation

Cited by 2 publications (1 citation statement) · References 59 publications
“…The smile interval labels and boundaries are predicted from the intensities of the facial Action Units outputted by OpenFace. For the present study the SMAD output is finally transformed in a 3 level smiles scale (NF, LI, HI ) with NF encoding Neutral Face, LI Low Intensity smiles (smiles with mouth closed) and HI High Intensity smiles (smiles with mouth opened), as proposed in [14].…”
Section: Automatic Annotation
Confidence: 99%
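The quoted pipeline maps OpenFace Action Unit intensities onto a three-level smile scale (NF, LI, HI). A minimal sketch of such a mapping, assuming AU12 (lip corner puller) signals smiling and AU25 (lips part) distinguishes open-mouth from closed-mouth smiles; the specific AUs and thresholds here are illustrative assumptions, not the thresholds used by SMAD:

```python
# Hypothetical mapping from OpenFace AU intensities (0-5 scale) to the
# 3-level smile scale described in the citing paper. AU choices (AU12,
# AU25) and both thresholds are assumptions for illustration only.

def classify_smile(au12: float, au25: float,
                   smile_thresh: float = 1.0,
                   open_thresh: float = 1.0) -> str:
    """Return 'NF' (Neutral Face), 'LI' (Low Intensity, mouth closed),
    or 'HI' (High Intensity, mouth open)."""
    if au12 < smile_thresh:
        return "NF"   # no appreciable lip-corner pull: neutral face
    if au25 >= open_thresh:
        return "HI"   # smiling with lips parted: high-intensity smile
    return "LI"       # smiling with mouth closed: low-intensity smile
```

In practice SMAD predicts smile interval labels and boundaries from sequences of AU intensities rather than single frames; this per-frame rule only illustrates the final 3-level discretization.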