Proceedings of the 2008 ACM Symposium on Applied Computing
DOI: 10.1145/1363686.1363966
Semi-supervised dimensionality reduction in image feature space

Abstract: Image feature space is typically complex due to the high dimensionality of data. Effective handling of this space has prompted many research efforts in the study of dimensionality reduction in the image domain. In this paper, we propose a semi-supervised reduction method that leverages relevance feedback information in the retrieval process to learn suitable linear and orthogonal embeddings. In the reduced space constructed by the proposed embedding, relevant images are kept close to each other, while irrelevan…
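The abstract describes learning a linear, orthogonal embedding from relevance feedback so that relevant images stay close and irrelevant ones are pushed apart. One generic way to realize that idea (a sketch of a scatter-difference construction, not necessarily the paper's exact objective) is to build within-pair and between-pair scatter matrices from the feedback pairs and take top eigenvectors:

```python
import numpy as np

def feedback_embedding(X, relevant_pairs, irrelevant_pairs, d):
    """Learn a d-dimensional orthogonal linear embedding from feedback.

    Hypothetical construction: maximize spread of irrelevant pairs
    minus spread of relevant pairs in the projected space.
    """
    n_feat = X.shape[1]
    S_rel = np.zeros((n_feat, n_feat))
    S_irr = np.zeros((n_feat, n_feat))
    for i, j in relevant_pairs:          # pairs judged relevant to each other
        diff = (X[i] - X[j])[:, None]
        S_rel += diff @ diff.T
    for i, j in irrelevant_pairs:        # pairs judged irrelevant
        diff = (X[i] - X[j])[:, None]
        S_irr += diff @ diff.T
    # Eigenvectors of a symmetric matrix are orthonormal, so the
    # projection W is orthogonal (W.T @ W = I).
    vals, vecs = np.linalg.eigh(S_irr - S_rel)
    W = vecs[:, np.argsort(vals)[::-1][:d]]  # top-d directions
    return W  # project with X @ W

# Toy usage: 6 points in 5-D feature space, two feedback pairs of each kind.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 5))
W = feedback_embedding(X, [(0, 1), (2, 3)], [(0, 4), (1, 5)], d=2)
print(W.shape)                            # (5, 2)
print(np.allclose(W.T @ W, np.eye(2)))    # True
```

Because the projection comes from a symmetric eigenproblem, orthogonality of the embedding falls out for free, matching the "linear and orthogonal" requirement stated in the abstract.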

Cited by 5 publications (6 citation statements)
References 14 publications
“…Through conducting many experiments on five popular multi-label datasets, the novel MSDA algorithm was compared with other dimensionality reduction methods: ORI (original dataset), PCA [2], LTSA [2], MDDM [9], and MLSI [8]. In particular, ORI as a baseline deals with the original dataset directly.…”
Section: Methods
confidence: 99%
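The statement above compares MSDA against unsupervised baselines such as PCA [2] and the original feature space (ORI). A PCA baseline of this kind is straightforward to reproduce with scikit-learn (a generic sketch on synthetic data, not the cited experimental setup):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for a multi-label dataset's feature matrix:
# 200 samples, 50-dimensional features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))

# PCA baseline: keep the top 10 principal components.
pca = PCA(n_components=10)
X_red = pca.fit_transform(X)
print(X_red.shape)  # (200, 10)
```

ORI, by contrast, simply feeds `X` to the downstream classifier unchanged.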
“…Parameters used in the experiments were set as follows: LPP and LTSA: k=10; MLSI: β=0.5, as in Ref. [8]; MSDA: default labeled-sample ratio 30%, α=0.8, β=0.85, 10 neighbors per sample; and multi-label k-nearest neighbor (MLKNN): k=10 [9].…”
Section: Environment Settings
confidence: 99%
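The LTSA setting quoted above (k=10 neighbors) can be reproduced with scikit-learn's `LocallyLinearEmbedding`, which implements LTSA via `method="ltsa"` (a sketch on synthetic data; the cited experiments may use a different implementation):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic dataset: 100 samples in a 20-dimensional feature space.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 20))

# LTSA with the k=10 neighborhood size from the quoted settings,
# reducing to 2 dimensions.
ltsa = LocallyLinearEmbedding(n_neighbors=10, n_components=2, method="ltsa")
X_low = ltsa.fit_transform(X)
print(X_low.shape)  # (100, 2)
```

LPP is not in scikit-learn, but it takes the same neighborhood-size parameter k, which is why the quoted settings list the two methods together.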
“…Dimensionality Reduction [3], Data Co-Reduction [5] and Hashing methods [4,9], to accelerate the search process of the high-dimensional image vectors, a.k.a. descriptor vectors.…”
Section: Introduction
confidence: 99%
“…Therefore, several approximate similarity search strategies were proposed, performing an approximation of the visual nearest neighbors of sequential search, by highly reducing the computational cost. For example, approximate similarity search strategies are the Dimensionality Reduction [Cheng et al 2008; Liang et al 2010; Wang and Binbin 2011], Data Co-Reduction [Huang et al 2011], Vantage Indexing [Bozkaya and Ozsoyoglu 1999; Fu et al 2000; Van Leuken and Veltkamp 2011] and Hashing methods [Gionis et al 1999; Jegou et al 2011; Heo et al 2012; Liu et al 2014]. Recently, the MSIDX method [Tiakas et al 2013] exploited a new key factor of the image descriptor vectors, namely the dimensions value cardinalities.…”
Section: Introduction
confidence: 99%