Learning with $\ell^{1}$-graph for image analysis
2010 | DOI: 10.1109/tip.2009.2038764

Abstract: The graph construction procedure essentially determines the potential of graph-oriented learning algorithms for image analysis. In this paper, we propose a process to build the so-called directed $\ell^{1}$-graph, in which the vertices comprise all the samples and the ingoing edge weights of each vertex describe its $\ell^{1}$-norm-driven reconstruction from the remaining samples and the noise. Then, a series of new algorithms for various machine learning tasks, e.g., data clustering, subspace learning, and semi-supervised learning, …
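The construction described in the abstract reduces, per vertex, to one sparse reconstruction problem. Below is a minimal Python sketch of that step, assuming the common Lasso (Lagrangian) relaxation of the constrained $\ell^{1}$ formulation and omitting the paper's explicit noise dictionary; the helper name l1_graph and the regularization weight alpha are illustrative, not from the paper.

import numpy as np
from sklearn.linear_model import Lasso

def l1_graph(X, alpha=0.1):
    # X: (n_samples, n_features). Returns W with W[i, j] the coefficient of
    # sample j in the sparse reconstruction of sample i (W[i, i] = 0).
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        B = np.delete(X, i, axis=0).T          # dictionary: remaining samples as columns
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        model.fit(B, X[i])                     # min (1/2d)||x_i - B a||^2 + alpha ||a||_1
        W[i, np.arange(n) != i] = model.coef_  # ingoing edge weights of vertex i
    return W

# Example: W = l1_graph(np.random.randn(50, 20)); each nonzero W[i, j]
# defines a directed edge from sample j into sample i.

Downstream tasks (clustering, subspace learning, semi-supervised learning) then operate on W, or a symmetrized version of it, in place of a fixed k-NN or epsilon-ball graph, which is what makes the construction adaptive to the data.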

Cited by 596 publications (442 citation statements) | References 17 publications
“…$g(z_n \mid x_n, \Lambda)\log w_{z_n}$ (8). Note that the marginal distribution $p(x_n \mid B_{z_n})$ is difficult to evaluate due to the integration. We then simplify it by using the mode of the posterior distribution of $\alpha_n$:…”
Section: Learning Algorithm (mentioning, confidence: 99%)
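The simplification quoted above is a standard mode (MAP) plug-in approximation. A hedged reconstruction of the step, with the integral and the point estimate $\hat{\alpha}_n$ inferred from the excerpt rather than quoted from the citing paper:

\[
p(x_n \mid B_{z_n}) = \int p(x_n \mid \alpha_n, B_{z_n})\, p(\alpha_n)\, d\alpha_n
\;\approx\; p(x_n \mid \hat{\alpha}_n, B_{z_n})\, p(\hat{\alpha}_n),
\qquad
\hat{\alpha}_n = \arg\max_{\alpha_n} p(\alpha_n \mid x_n, B_{z_n}),
\]

which replaces the intractable integration with a single evaluation at the posterior mode.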
“…Originally applied to modeling the human visual cortex [1] [2], sparse coding approximates the input signal $x \in \mathbb{R}^{d}$ as a sparse linear combination of an over-complete basis or dictionary $B \in \mathbb{R}^{d \times D}$, where $d < D$. Among the different formulations of sparse coding, the one derived from $\ell^{1}$-norm minimization is the most popular, owing to its coding efficiency via linear programming and its relationship to the NP-hard $\ell^{0}$-norm problem in compressive sensing [3]. The applications of sparse coding range from image restoration [4] [5] and machine learning [6] [7] [8] to various computer vision tasks [9] [10] [11] [12]. Many efficient algorithms for finding such a sparse representation have been proposed in the past several years [13].…”
Section: Introduction (mentioning, confidence: 99%)
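For reference, the $\ell^{1}$-norm minimization the excerpt refers to is usually posed in one of two equivalent forms (the tolerance $\epsilon$ and multiplier $\lambda$ are generic, not taken from the cited works):

\[
\min_{\alpha \in \mathbb{R}^{D}} \|\alpha\|_{1} \ \text{s.t.}\ \|x - B\alpha\|_{2} \le \epsilon,
\qquad\text{or}\qquad
\min_{\alpha \in \mathbb{R}^{D}} \tfrac{1}{2}\|x - B\alpha\|_{2}^{2} + \lambda\|\alpha\|_{1}.
\]

In the noiseless case ($\epsilon = 0$) the first problem can be recast as a linear program, which is the coding efficiency the excerpt mentions; the $\ell^{1}$ norm acts as the convex surrogate for the NP-hard $\ell^{0}$ count.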
“…i.e., local Fisher discriminant analysis (LFDA) [6], NPE [11], SGE [12], and BSGDA [10], were also implemented for comparison. The codes for LFDA and NPE were downloaded online.…”
Section: Experimental Results and Analysis (mentioning, confidence: 99%)
“…In [3], a new method that integrates the spatial and spectral information of the HSI was proposed to learn a local discriminant graph. In recent years, sparse representation has been exploited to produce a graph whose edges are intended to be sparse [12]. This sparse graph embedding (SGE) exploits the linear structure of the data and has been widely used in HSI dimensionality reduction (DR). Ly et al. [10] proposed block sparse graph based discriminant analysis (BSGDA), which learns a block sparse graph for supervised DR.…”
Section: Introduction (mentioning, confidence: 99%)
“…Many related works have been developed, such as the $\ell^{1}$-graph for image classification [8], kernel-based SRC [9], Gabor-feature-based SRC [10,48], robust sparse coding (RSC) [24], robust alignment with sparse and low-rank decomposition [25], joint dimension reduction and dictionary learning [49], face and ear multimodal biometric systems [50], etc. In particular, the RSC method [24] has shown excellent results in FR with various occlusions.…”
(mentioning, confidence: 99%)