2012
DOI: 10.1007/s10994-012-5306-7
Sequential approaches for learning datum-wise sparse representations

Abstract: In supervised classification, data representation is usually considered at the dataset level: one looks for the "best" representation of data, assuming it to be the same for all the data in the data space. We propose a different approach where the representations used for classification are tailored to each datum in the data space. One immediate goal is to obtain sparse datum-wise representations: our approach learns to build a representation specific to each datum that contains only a small subset of the features…
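The abstract's core idea, a different sparse feature subset per datum rather than one representation for the whole dataset, can be illustrated with a toy sketch. All names here are hypothetical and the greedy acquisition rule is an assumption for illustration, not the paper's actual learned policy: for each datum, features are added until a linear scorer's partial confidence crosses a threshold, so different data end up with different masks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): 8 data points, 6 features, and a stand-in
# pretrained linear scorer w. Which features matter varies per datum.
X = rng.normal(size=(8, 6))
w = rng.normal(size=6)

def datum_wise_mask(x, w, tau=1.5):
    """Greedily acquire features for ONE datum until the partial score
    |w . (x * mask)| exceeds a confidence threshold tau (assumed rule)."""
    mask = np.zeros_like(x)
    # Rank features by how much each would move this datum's score.
    order = np.argsort(-np.abs(w * x))
    for j in order:
        mask[j] = 1.0
        if abs(np.dot(w, x * mask)) >= tau:
            break
    return mask

masks = np.array([datum_wise_mask(x, w) for x in X])
# Each row is a binary mask; rows differ in which (and how many)
# features they keep -- a datum-wise sparse representation.
print(masks.sum(axis=1))
```

The point of the sketch is only the shape of the output: a per-instance binary mask, in contrast to dataset-level feature selection where every row would share one mask.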

Cited by 14 publications (21 citation statements). References 23 publications.
“…This phenomenon is due to the presence of expensive features that clearly bring relevant information. A similar behaviour is observed with GreedyMiser and with B-REAM, but the latter seems more agile and able to better benefit from relevant expensive features 7 . We suppose that this is due to the use of reinforcement-learning in- Fig.…”
Section: Methods (supporting)
confidence: 71%
“…∇_{γ,θ} f_{γ,i}(z_t) · c_i (7), with a_t sampled w.r.t. f_γ(z_t), and f_{γ,i} is the i-th component of the output of f_γ.…”
Section: Representing Partially Acquired Data (mentioning)
confidence: 99%
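The gradient term quoted above reads like a score-function (REINFORCE-style) estimate: an action a_t is sampled from the policy output f_γ(z_t), and the gradient of the sampled component is weighted by a per-action cost c_i. A minimal sketch of that pattern, assuming a linear-softmax policy and hypothetical names throughout (this is not the citing paper's exact update):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

# Hypothetical linear policy f_gamma: state z_t -> distribution over 4 features.
gamma = rng.normal(size=(4, 3))      # policy parameters (4 actions, state dim 3)
z_t = rng.normal(size=3)             # current state (partially acquired datum)
c = np.array([0.1, 0.5, 0.2, 0.9])   # assumed per-feature acquisition costs

probs = softmax(gamma @ z_t)
a_t = rng.choice(4, p=probs)         # a_t sampled w.r.t. f_gamma(z_t)

# Score-function estimate of the expected-cost gradient: for a linear-softmax
# policy, grad_gamma log pi(a_t | z_t) = outer(one_hot(a_t) - probs, z_t),
# which is then weighted by the cost of the sampled action.
one_hot = np.eye(4)[a_t]
grad_gamma = np.outer(one_hot - probs, z_t) * c[a_t]
print(grad_gamma.shape)  # (4, 3)
```

Averaging this estimate over many sampled actions approximates the gradient of the expected acquisition cost, which is the role the quoted term plays in the cited derivation.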
“…In [5] the authors extend the MDP proposed for sequential text classification to deal with any other type of data. The formulation is almost the same as in [4], although this time the MDP can decide what feature to sample from the instance under analysis (i.e., there is no sequential input).…”
Section: Early Text Classification (mentioning)
confidence: 99%
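The quoted description, an MDP whose agent decides which feature of the current instance to sample next, with no fixed sequential input order, can be sketched minimally. All names are hypothetical; the real formulation also learns the policy and includes a classification action with a reward:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal sketch of the feature-acquisition MDP described above: the state
# is the set of features acquired so far (plus their values), and actions
# are "acquire feature j" or "stop" (stop-and-classify in the real MDP).
n_features = 5
x = rng.normal(size=n_features)  # one datum under analysis

state = {"acquired": set(), "values": np.zeros(n_features)}

def step(state, action):
    """Transition: acquiring feature j reveals x[j]; 'stop' ends the episode."""
    if action == "stop":
        return state, True
    j = action
    state["acquired"].add(j)
    state["values"][j] = x[j]
    return state, False

# A fixed toy policy (a learned one would choose adaptively per datum):
# acquire features 0 and 3 in any order, then stop.
for action in [0, 3, "stop"]:
    state, done = step(state, action)

print(sorted(state["acquired"]))  # [0, 3]
```

Because actions name arbitrary features, the agent is free to probe features in any order per instance, which is exactly the difference from the sequential-text MDP that [5] extends.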
“…This is not a surprising result if we notice that this problem is highly imbalanced: the imbalance ratio for the training and test partitions is 12.1 and 9.56, respectively. Furthermore, the reduction of the vocabulary may significantly affect this particular domain (the jargon used in chat conversations)⁵. [Footnote 5: Police officers acted as children; predators are real.]…”
Section: Sexual Predator Detection (mentioning)
confidence: 99%