2015
DOI: 10.1016/j.neunet.2014.09.005

Challenges in representation learning: A report on three machine learning contests

Abstract: The ICML 2013 Workshop on Challenges in Representation Learning focused on three challenges: the black box learning challenge, the facial expression recognition challenge, and the multimodal learning challenge. We describe the datasets created for these challenges and summarize the results of the competitions. We provide suggestions for organizers of future challenges and some comments on what kind of knowledge can be gained from machine learning competitions.

Cited by 545 publications (418 citation statements). References 7 publications.
“…Intuitively, good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class. The model was developed with the Python library scikit-learn [13] and was trained on a set of different databases such as JAFFE [14], Tarrlab [15], and FER-2013 [16]. These databases were made up of facial expression images.…”
Section: A. Methods (mentioning)
confidence: 99%
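The excerpt above describes a maximum-margin (SVM) classifier trained with scikit-learn on facial expression images. Below is a minimal, hypothetical sketch of that setup; the random arrays merely stand in for flattened 48x48 FER-2013 faces and their seven emotion labels, which are not reproduced here.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in data: FER-2013 images are 48x48 grayscale, i.e. 2304 features,
# labeled with one of 7 emotion classes. Real image data would be loaded here.
rng = np.random.default_rng(0)
X = rng.random((200, 2304))
y = rng.integers(0, 7, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear SVM picks the hyperplane with the largest margin to the nearest
# training points of any class (the support vectors), as the quote describes.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))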
“…To a great extent, the progress we are currently witnessing in the above face analysis problem is largely attributed to the collection and annotation of "in-the-wild" datasets. The contributions of the already developed datasets and benchmarks for the analysis of facial expressions in the wild have been demonstrated during the Challenges in Representation Learning (ICML 2013) [67], in the series of Emotion Recognition in the Wild challenges (EmotiW 2013, 2014, 2015 [61, 68-70], and 2016, https://sites.google.com/site/emotiw2016/), and in the recently organized workshop on context-based affect recognition (CBAR 2016, http://cbar2016.blogspot.gr/). For a more extended overview of datasets collected in the wild, the reader is referred to [71].…”
Section: Ubiquitous Contextual Information (mentioning)
confidence: 99%
“…These include instance-based [9], regression [10], regularization [11], decision tree [12], probabilistic [13], reinforcement learning [14], dimensionality reduction [15], ensemble [16], Bayesian [17], maximum margin [18], evolutionary [19], clustering [9], association rule learning [20], artificial neural network [12,21,22] and deep learning [23] methods (see Figure 1). Regardless of their classification performance, many of these algorithms act as black boxes, giving poor insight into the classification structure and lacking robustness owing to the high dimensionality of the data [24,25]. Recently, ensemble classification methods have received more attention from the machine learning community, resulting in their increased popularity in different applications such as hyperspectral image classification [26-28].…”
Section: Introduction (mentioning)
confidence: 99%
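The excerpt above contrasts black-box classifiers with ensemble methods for high-dimensional data such as hyperspectral imagery. As a minimal sketch of the ensemble idea, assuming synthetic stand-in data rather than a real hyperspectral cube, a soft-voting ensemble in scikit-learn could look like this:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in: 100 "spectral bands" and 6 "land-cover classes";
# the dimensions are illustrative only, not taken from the cited paper.
X, y = make_classification(n_samples=500, n_features=100, n_informative=20,
                           n_classes=6, random_state=0)

# Soft voting averages the members' predicted class probabilities, so a
# mistake by one base model can be outvoted by the others.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
)
print("3-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=3).mean())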
“…We believe DoTRules can be applied more generally to the classification of discrete data such as hyperspectral satellite imagery products. Bayesian [17], maximum margin [18], evolutionary [19], clustering [9], association rule learning [20], artificial neural network [12,21,22] and deep learning [23] methods (see Figure 1). Regardless of their classification performance, many of these algorithms act as black boxes, giving poor insight into the classification structure and lacking robustness owing to the high dimensionality of the data [24,25].…”
(mentioning)
confidence: 99%