2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020) 2020
DOI: 10.1109/fg47880.2020.00037
Spatio-temporal fusion for Macro- and Micro-expression Spotting in Long Video Sequences

Cited by 50 publications (14 citation statements)
References 20 publications
“…The state-of-the-art techniques arose from the recent micro-expression spotting competition, the Micro-Expression Grand Challenge 2020 [17]. We found that our model is comparable to the existing techniques, being beaten only by the optical-flow-based technique proposed by Zhang et al. [39]. The F1 scores reported by Pan are averaged over both micro-expression and macro-expression spotting and thus should not be compared directly with the other methods [23].…”
Section: Micro-expressions
confidence: 80%
“…The co-occurrence of macro- and micro-expressions is common in real life. An automatic spotting system for micro- and macro-expressions was designed by [53]. A new spotting benchmark has been proposed recently [118].…”
Section: Micro-expression Spotting
confidence: 99%
“…Prior to the learning phase, a series of pre-processing steps is introduced to ensure consistency of the data before model learning. Motivated by the work of [19], we take the landmark position of the nose region with a five-pixel margin to eliminate the global head motion for each frame. Then, we omit the left and right eye regions, since optical-flow features are highly sensitive to eye blinking [9].…”
Section: Pre-processing
confidence: 99%
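The pre-processing described in this excerpt can be sketched as two operations on a dense optical-flow field: subtract the mean flow measured around the nose landmark (a proxy for global head motion) and zero out the eye regions (dominated by blinking). This is a minimal illustrative sketch, not the cited paper's implementation; the function name, `nose_xy`, and `eye_boxes` are assumed inputs (e.g. from a separate facial-landmark detector).

```python
import numpy as np

def preprocess_flow(flow, nose_xy, eye_boxes, margin=5):
    """Hedged sketch of the pre-processing described above.

    flow      : (H, W, 2) dense optical-flow field for one frame pair
    nose_xy   : (x, y) nose landmark position (assumed given)
    eye_boxes : list of (x0, y0, x1, y1) eye regions (assumed given)
    margin    : pixel margin around the nose landmark (five in the excerpt)
    """
    x, y = nose_xy
    # Flow in the nose patch approximates global head motion,
    # since the nose region is rigid relative to facial expressions.
    nose_patch = flow[y - margin:y + margin + 1, x - margin:x + margin + 1]
    head_motion = nose_patch.reshape(-1, 2).mean(axis=0)

    # Subtract the estimated global motion from the whole field.
    aligned = flow - head_motion

    # Omit the eye regions: optical flow there is dominated by blinking.
    for (x0, y0, x1, y1) in eye_boxes:
        aligned[y0:y1, x0:x1] = 0.0
    return aligned
```

Any dense flow estimator could feed this function; the key design point from the excerpt is that global-motion removal is anchored on the nose because it barely deforms during expressions.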