2021
DOI: 10.1016/j.image.2021.116367
Adaptive hypergraph learning with multi-stage optimizations for image and tag recommendation

Cited by 9 publications (3 citation statements)
References 27 publications
“…Let $\boldsymbol{f}$ and $\boldsymbol{H}$ be fixed, and let $\boldsymbol{J}=\boldsymbol{I}-\frac{\boldsymbol{A}}{1+\theta}$. $\boldsymbol{f}^{\ast}$ is obtained through Equation (): $\boldsymbol{f}^{\ast}=\underset{\boldsymbol{f}}{\arg\min}\;T(\boldsymbol{f};\boldsymbol{w},\boldsymbol{H})=\omega\boldsymbol{J}^{-1}\boldsymbol{y}$, where $\omega=\frac{\theta}{1+\theta}$. Update $\boldsymbol{w}^{\ast}$: the optimal $\boldsymbol{w}^{\ast}$ is obtained through the least mean square method [29] and is iteratively updated along the negative gradient of the optimization function.…”
Section: Methods
Mentioning (confidence: 99%)
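The closed-form ranking step quoted above can be checked numerically. Below is a minimal sketch, assuming A is a normalized hypergraph affinity matrix with spectral radius at most 1, y is the query vector, and theta trades the fit against the regularizer; the function name and the toy data are illustrative, not taken from the cited paper.

import numpy as np

def ranking_vector(A: np.ndarray, y: np.ndarray, theta: float) -> np.ndarray:
    """Closed-form ranking step: f* = omega * (I - A/(1+theta))^{-1} y."""
    n = A.shape[0]
    J = np.eye(n) - A / (1.0 + theta)      # J = I - A/(1+theta)
    omega = theta / (1.0 + theta)          # omega = theta/(1+theta)
    # Solve J f = omega * y rather than forming J^{-1} explicitly.
    return omega * np.linalg.solve(J, y)

# Toy usage: four vertices, a row-stochastic affinity matrix, one query vertex.
A = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.5, 0.5, 0.0]])
y = np.array([1.0, 0.0, 0.0, 0.0])
f_star = ranking_vector(A, y, theta=0.1)   # ranking scores for all vertices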
“…The optimal $\boldsymbol{w}^{\ast}$ is obtained through the least mean square method [29]. It is iteratively updated along the negative gradient of the optimization function. $\boldsymbol{w}^{\ast}$ is computed by…”
Section: Adaptive Hypergraph Learning
Mentioning (confidence: 99%)
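The statement above refers to a least-mean-square, negative-gradient refinement of the hyperedge weights. A minimal sketch follows, assuming the objective T(w) is differentiable and that weights stay non-negative; grad_T, the step size, and the projection are illustrative assumptions, not the cited paper's exact update rule.

import numpy as np

def lms_update_weights(grad_T, w0: np.ndarray, lr: float = 0.01,
                       n_steps: int = 100) -> np.ndarray:
    """Iteratively revise w along the negative gradient of T(w)."""
    w = w0.copy()
    for _ in range(n_steps):
        w = w - lr * grad_T(w)       # step against the gradient
        w = np.maximum(w, 0.0)       # keep hyperedge weights non-negative (assumption)
    return w

# Toy usage: quadratic objective T(w) = 0.5 * ||w - t||^2, whose gradient is w - t.
t = np.array([0.2, 0.8, 0.5])
w_star = lms_update_weights(lambda w: w - t, w0=np.ones(3))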
“…In the application of hypergraph learning to recommendation tasks, Georgios Karantaidis et al. [52] proposed a multi-stage optimization method based on hypergraphs. Through the optimization of hypergraph ranking, hypergraph updates, and adaptive edge weights, accurate ranking vectors are generated for image and label recommendation.…”
Section: Hypergraph Learning
Mentioning (confidence: 99%)
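The multi-stage structure summarized above (hypergraph ranking, hypergraph update, adaptive edge weighting) can be pictured as an alternating loop. The sketch below is schematic: the affinity construction follows the standard normalized hypergraph form Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}, and the weight update is a toy heuristic; neither is claimed to be the cited paper's actual procedure.

import numpy as np

def normalized_affinity(H: np.ndarray, w: np.ndarray) -> np.ndarray:
    """A = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} (standard hypergraph normalization)."""
    De = np.maximum(H.sum(axis=0), 1e-12)        # hyperedge degrees
    Dv = np.maximum((H * w).sum(axis=1), 1e-12)  # weighted vertex degrees
    A = (H * (w / De)) @ H.T
    d = 1.0 / np.sqrt(Dv)
    return d[:, None] * A * d[None, :]

def multi_stage_recommend(H: np.ndarray, y: np.ndarray, theta: float = 0.1,
                          lr: float = 0.01, n_rounds: int = 5) -> np.ndarray:
    """Alternate ranking, hypergraph (affinity) update, and weight adaptation."""
    n_vertices, n_edges = H.shape
    w = np.ones(n_edges)
    f = np.zeros(n_vertices)
    for _ in range(n_rounds):
        # Stage 1: hypergraph ranking with the current weights (closed form above).
        A = normalized_affinity(H, w)
        J = np.eye(n_vertices) - A / (1.0 + theta)
        f = (theta / (1.0 + theta)) * np.linalg.solve(J, y)
        # Stages 2-3 (toy): favour hyperedges whose member vertices rank highly.
        edge_score = H.T @ f
        w = np.maximum(w + lr * (edge_score - edge_score.mean()), 0.0)
    return f

# Toy usage: 4 vertices, 3 hyperedges (incidence matrix H), one query vertex.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0])
f = multi_stage_recommend(H, y)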