2016
DOI: 10.48550/arxiv.1608.01041
Preprint

Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution

Cited by 6 publications (7 citation statements: 1 supporting, 6 mentioning, 0 contrasting)
References 23 publications
“…The results show a similar behavior to the one previously shown for AffectNet. The main difference is a higher general accuracy (top accuracy 83.2%), consistent with the results of [Barsoum et al., 2016] for this dataset.…”
Section: A3 Model Bias Results (supporting)
confidence: 87%
“…The final images have a resolution of 425 by 425 pixels. Appendix A repeats the same analysis for a second dataset, FER+ [Barsoum et al., 2016].…”
Section: Dataset (mentioning)
confidence: 97%
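
For context on the resolution quoted above, the short sketch below resizes a face crop to 425 by 425 pixels. Pillow, the function name, and the file path are illustrative assumptions; the cited work does not specify its preprocessing tooling here.

# Illustrative sketch only: produce a 425x425 image as described in the
# citation statement above. Pillow and the example path are assumptions.
from PIL import Image

def resize_to_425(path: str, size: int = 425) -> Image.Image:
    """Load an image and resize it to size x size pixels."""
    img = Image.open(path).convert("RGB")            # force a 3-channel image
    return img.resize((size, size), Image.BILINEAR)  # bilinear resampling

# Hypothetical usage:
# face = resize_to_425("face_crop.png")
# face.save("face_425.png")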
“…To allow conversion of a model from .onnx format to .wasm format, the ONNX variant should adhere to the following constraints: ONNX version 1.7.0, opset 12, file format 7. The Scailable Python package provides a validation function to aid in the conversion process. For the sake of replicability, all subjects are already pre-trained and we provide the same inputs to each of them.…”
Section: Subjects Selection (mentioning)
confidence: 99%
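
To make the quoted conversion constraints concrete, below is a minimal sketch that checks an exported model against them using the standard onnx Python package. This is not the Scailable validation function mentioned in the statement, and the function name and file name are hypothetical examples.

# Minimal sketch, assuming the constraints quoted above (opset 12, IR
# file format 7). Uses only the public onnx Python API; the model file
# name is a hypothetical example.
import onnx

def check_wasm_conversion_constraints(path: str) -> None:
    """Verify the quoted constraints: IR file format 7 and opset 12."""
    model = onnx.load(path)            # parse the .onnx protobuf
    onnx.checker.check_model(model)    # basic structural validity check

    # "file format 7" corresponds to the model's IR version field.
    if model.ir_version != 7:
        raise ValueError(f"expected ir_version 7, got {model.ir_version}")

    # The default ONNX domain must be exported with opset 12.
    default_opsets = [imp.version for imp in model.opset_import
                      if imp.domain in ("", "ai.onnx")]
    if 12 not in default_opsets:
        raise ValueError(f"expected opset 12, got {default_opsets}")

# Hypothetical usage:
# check_wasm_conversion_constraints("emotion_ferplus.onnx")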
“…Even though assessing the accuracy of the models is out of scope of this study, the interested reader can inspect their output in our replication package.
MNIST [20]   | Image Classification | MNIST       | 70k/30k
Emotion [6]  | Image Classification | Emotion FER | 4k/1.8k
CIFAR10 [18] | Image Classification | CIFAR-10    | 4k/2k
YOLOv4 [7]   | Object Detection     | COCO        | 700/300…”
Section: Subjects Selection (mentioning)
confidence: 99%