2020
DOI: 10.1049/iet-ipr.2019.1027
Joint low‐rank project embedding and optimal mean principal component analysis

Cited by 4 publications (5 citation statements) | References 29 publications
“…Similar to the method in [46], suppose that $\mathbf{E}^{(t)}$, $\mathbf{S}^{(t)}$, $\mathbf{F}^{(t)}$, $\bm{\Lambda}^{(t)}$ and $\bm{\Delta}^{(t)}$ are the values of $\mathbf{E}$, $\mathbf{S}$, $\mathbf{F}$, $\bm{\Lambda}$ and $\bm{\Delta}$ at the $t$-th iteration, respectively; then the update of the variable $\mathbf{S}$ at the $(t+1)$-th iteration is:…”
Section: 2.1 (mentioning)
confidence: 99%
“…Similar to the method in [46], suppose that $\mathbf{E}^{(t)}$, $\mathbf{S}^{(t)}$, $\mathbf{F}^{(t)}$, $\bm{\Lambda}^{(t)}$ and $\bm{\Delta}^{(t)}$ are the values of $\mathbf{E}$, $\mathbf{S}$, $\mathbf{F}$, $\bm{\Lambda}$ and $\bm{\Delta}$ at the $t$-th iteration, respectively; then the update of the variable $\mathbf{S}$ at the $(t+1)$-th iteration is:
$$\mathbf{S}^{(t+1)} = \mathbf{U}^{(t)}\,\mathrm{diag}\Big(\max\big(\sigma_1^{(t)} - \beta(\mu\eta_Z)^{-1},\,0\big),\ \max\big(\sigma_2^{(t)} - \beta(\mu\eta_Z)^{-1},\,0\big),\ \ldots,\ \max\big(\sigma_r^{(t)} - \beta(\mu\eta_Z)^{-1},\,0\big)\Big)\big(\mathbf{V}^{(t)}\big)^{T}$$
…”
Section: Non-negative Low-rank and Adaptive Preserving SMR (mentioning)
confidence: 99%
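The quoted update is a singular value thresholding (SVT) step: each singular value of the current iterate is shrunk by $\beta(\mu\eta_Z)^{-1}$ and clipped at zero before the matrix is rebuilt from the same singular vectors. A minimal NumPy sketch under that reading (the names `B_t` and `tau` are illustrative; `tau` stands for $\beta/(\mu\eta_Z)$, and the SVD factors play the role of $\mathbf{U}^{(t)}$, $\sigma^{(t)}$, $\mathbf{V}^{(t)}$):

```python
import numpy as np

def svt_update(B_t, tau):
    """Singular value thresholding: soft-threshold the singular values
    of B_t by tau and rebuild the matrix, mirroring the quoted S-update."""
    U, sigma, Vt = np.linalg.svd(B_t, full_matrices=False)
    sigma = np.maximum(sigma - tau, 0.0)  # max(sigma_i - tau, 0)
    return (U * sigma) @ Vt               # U diag(sigma) V^T
```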
“…By linearizing the quadratic term in () at $\mathbf{S}^{(t)}$ and adding a proximal term [39], this problem becomes
$$\min_{\mathbf{S}}\ \beta\|\mathbf{S}\|_* + \frac{\mu\eta_{\tilde{X}}}{2}\left\|\mathbf{S} - \mathbf{S}^{(t)} + \frac{1}{\eta_{\tilde{X}}}\tilde{\mathbf{X}}^{T}\big(\tilde{\mathbf{X}}\mathbf{S}^{(t)} - \mathbf{A}^{(t)}\big)\right\|_F^2,$$
where $\eta_{\tilde{X}} > \sigma_{\max}^2(\tilde{\mathbf{X}})$. Suppose that $\mathbf{B}^{(t)} = \mathbf{U}_{r_t}\bm{\Sigma}_{r_t}\mathbf{V}_{r_t}^{T}$ is the truncated singular value decomposition of the matrix $\mathbf{B}^{(t)} = \mathbf{S}^{(t)} - \frac{1}{\eta_{\tilde{X}}}\tilde{\mathbf{X}}^{T}(\tilde{\mathbf{X}}\mathbf{S}^{(t)} - \mathbf{A}^{(t)})$, where $\bm{\Sigma}_{r}$…”
Section: Model and Algorithm (mentioning)
confidence: 99%
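Taken together with the thresholding rule above, the excerpt's subproblem has a closed-form solution: form the proximal point $\mathbf{B}^{(t)}$ by a gradient step on the quadratic term, then soft-threshold its singular values at $\beta/(\mu\eta_{\tilde{X}})$. A hedged NumPy sketch, where every name (`S_t`, `X_tilde`, `A_t`, `beta`, `mu`) is an illustrative stand-in for the quoted symbols rather than the authors' code:

```python
import numpy as np

def linearized_prox_step(S_t, X_tilde, A_t, beta, mu):
    """One linearized proximal step for the nuclear-norm subproblem:
    a gradient step on the quadratic term, then singular value
    thresholding with threshold beta / (mu * eta)."""
    # eta must exceed the squared spectral norm of X~ (sigma_max^2).
    eta = 1.01 * np.linalg.norm(X_tilde, 2) ** 2
    # Proximal point: B_t = S_t - (1/eta) * X~^T (X~ S_t - A_t).
    B_t = S_t - (X_tilde.T @ (X_tilde @ S_t - A_t)) / eta
    U, sigma, Vt = np.linalg.svd(B_t, full_matrices=False)
    sigma = np.maximum(sigma - beta / (mu * eta), 0.0)
    return (U * sigma) @ Vt
```

A full thin SVD is used here for simplicity; the excerpt's truncated SVD of $\mathbf{B}^{(t)}$ yields the same iterate, since the thresholding zeroes out the small singular values anyway.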
“…By linearizing the quadratic term in (14) at $\mathbf{S}^{(t)}$ and adding a proximal term [39], this problem becomes…”
Section: Algorithm (mentioning)
confidence: 99%
“…Feature extraction methods are divided into two main groups: unsupervised and supervised [9]. Unsupervised feature extraction methods such as principal component analysis (PCA) [10] do not use class-label information, so they often ignore the discrimination among different classes.…”
Section: Introduction (mentioning)
confidence: 99%
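To make the excerpt's point concrete, here is a minimal PCA sketch (standard covariance eigendecomposition; the function name and shapes are illustrative, not from the cited papers). The projection directions are computed from the data covariance alone, so class labels never enter the computation and class discrimination is not encouraged:

```python
import numpy as np

def pca_project(X, k):
    """Unsupervised PCA: project rows of X (samples x features) onto
    the top-k principal components. Class labels are never used."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)          # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
    W = eigvecs[:, ::-1][:, :k]             # top-k eigenvectors
    return Xc @ W
```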