2017
DOI: 10.1007/s13735-017-0126-y

Unsupervised group feature selection for media classification

Abstract: The selection of an appropriate feature set is crucial for the efficient analysis of any media collection. In general, feature selection strongly depends on the data and commonly requires expert knowledge and previous experiments in related application scenarios. Current unsupervised feature selection methods usually ignore existing relationships among components of multi-dimensional features (group features) and operate on single feature components. In most applications, features carry little semantics. Thus, …

Cited by 4 publications (5 citation statements)
References 51 publications (70 reference statements)
“…One such situation is the analysis of variables that are interrelated based on correlations or contextual similarities, since ignoring such group structure reduces the stability, consistency, and interpretability of the selection [3]. Methods accounting for group structures have been investigated since at least 1999 [4] and have been used in a broad range of applications, including media classification [5], disease prediction [6], automotive engineering [7], voting behavior analysis [8], emotion recognition [9], and credit risk analysis [10]. One of the most common applications is in omics research, such as gene expression microarray or single nucleotide polymorphism data [11-15].…”
Section: Introduction
confidence: 99%
“…Many unsupervised feature selection methods, both similarity preserving (filter) [9,19] and embedded [6,8,10,14] methods, are based on input data alone and rarely take the advantage of the external sources of knowledge about feature group structures. The feature groups used by some feature selection methods are also formed with input data [15,18]. Some domain specific unsupervised methods [3] are proposed for selecting genes from different gene groups, yet they do not combine group based feature selection with instance-feature data which is also useful for feature selection.…”
Section: Related Work
confidence: 99%
“…Collinearity tolerance: Highly correlated predictors are treated alike since it is assumed that high correlation implies similarity in content, and interpretability is enhanced if those variables are selected jointly (Zou & Hastie, 2005; Bondell & Reich, 2008; Dormann et al., 2013). Group-level consistency: Once a feature of a prespecified group is added to the model, all variables of that group are included, since it is likely that the predictors of a group are only meaningful together (Zaharieva et al., 2017; Yuan & Lin, 2006; Breheny & Huang, 2015; Gregorutti et al., 2015).…”
Section: Introduction
confidence: 99%
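The "group-level consistency" property quoted in the statement above is the defining behavior of the group lasso (Yuan & Lin, 2006) cited there. The following is a minimal sketch, assuming synthetic data and a plain NumPy proximal-gradient loop; it illustrates that property only and is not the unsupervised method of the indexed paper. Block soft-thresholding either keeps or zeroes every coefficient of a predefined feature group jointly.

import numpy as np

rng = np.random.default_rng(0)
n = 100
groups = [range(0, 3), range(3, 6), range(6, 9)]   # three predefined groups of three features
X = rng.standard_normal((n, 9))
beta_true = np.zeros(9)
beta_true[0:3] = [1.5, -2.0, 1.0]                  # only the first group is informative
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 5.0                                          # hypothetical regularization strength
step = 1.0 / np.linalg.norm(X, 2) ** 2             # step size from the largest singular value
beta = np.zeros(9)
for _ in range(500):
    grad = X.T @ (X @ beta - y)                    # gradient of the squared-error loss
    z = beta - step * grad                         # gradient step
    for g in groups:                               # block soft-thresholding per group
        idx = list(g)
        norm_g = np.linalg.norm(z[idx])
        scale = max(0.0, 1.0 - step * lam * np.sqrt(len(idx)) / (norm_g + 1e-12))
        beta[idx] = scale * z[idx]                 # whole group is shrunk or zeroed together

selected = [i for i, g in enumerate(groups) if np.linalg.norm(beta[list(g)]) > 1e-8]
print("selected groups:", selected)                # expected: [0]

Under these assumed settings only the first (informative) group survives the thresholding, so all of its components enter the model together, which is exactly the group-level consistency described in the quoted passage.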
“…When predictors form groups based on their correlation or contextual similarity, related information is spread over numerous features. This makes selection at the group level more appropriate than identification of individual variables (Subrahmanya & Shin, 2010; Zaharieva et al., 2017). Example scenarios are when a set of features was collected with the same measurement instrument, was derived in different ways from the same data, or originated from the same experimental setting.…”
Section: Introduction
confidence: 99%