2018
DOI: 10.1016/j.ipl.2018.03.006

Lie group impression for deep learning

Cited by 5 publications (4 citation statements)
References 9 publications
“…Anchors are multiple bounding boxes with different sizes and aspect ratios that are generated centered on each pixel of the image. Yang et al. (2018) propose a dynamic mechanism named MetaAnchor, which can select appropriate anchors for dynamic generation. Zhang et al. (2017) propose a scale compensation anchor matching mechanism to improve the recall rate for tiny objects.…”
Section: Anchor-based Optimization
Mentioning confidence: 99%
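The anchor scheme this excerpt describes can be made concrete with a short sketch. The function below is illustrative only (the `generate_anchors` name and the default scales and ratios are assumptions, not from either cited paper): it tiles boxes of several sizes and aspect ratios around the center of every feature-map cell.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride,
                     scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
    """Generate anchor boxes (x1, y1, x2, y2) centered on each feature-map cell.

    Illustrative sketch; scales/ratios are assumed defaults, not values
    from the cited papers.
    """
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # Center of this cell in input-image coordinates.
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    # Box with area s^2 and aspect ratio r = w / h.
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.asarray(anchors)

boxes = generate_anchors(feat_h=4, feat_w=4, stride=16)
print(boxes.shape)  # (144, 4): 4 * 4 cells * 3 scales * 3 ratios
```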
“…Yang et al. [41] proposed an algorithm for capturing the Lie group manifold structure of visual impressions. They developed single-layer Lie group models and stacked them to yield a deep architecture; they also solved the learning problem for the network weights by designing a Lie group-based gradient descent algorithm.…”
Section: Lie Group Deep Structure Learning
Mentioning confidence: 99%
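To make the quoted idea of Lie group-based gradient descent tangible, here is a minimal sketch of a generic update on the orthogonal group, not the authors' exact algorithm from [41]: the Euclidean gradient is projected onto the Lie algebra (skew-symmetric matrices), and the step is taken through the matrix exponential, so the weights never leave the group.

```python
import numpy as np
from scipy.linalg import expm

def lie_group_step(W, euclid_grad, lr=0.1):
    """One gradient step keeping W on the orthogonal group O(n).

    Generic illustration, not the algorithm of [41]: the Euclidean
    gradient is projected onto the Lie algebra so(n) (skew-symmetric
    matrices), and the update moves along the group via the matrix
    exponential, so W stays exactly orthogonal.
    """
    A = W.T @ euclid_grad
    xi = 0.5 * (A - A.T)       # projection onto so(n): skew-symmetric part
    return W @ expm(-lr * xi)  # move along the group via the exponential map

# Toy check: descend on ||W - I||_F^2 while staying on the group.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((4, 4)))
for _ in range(200):
    grad = 2 * (W - np.eye(4))  # Euclidean gradient of the loss
    W = lie_group_step(W, grad)
print(np.allclose(W.T @ W, np.eye(4)))  # True: orthogonality preserved
```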
“…The authors extend this theory to a more generic case by treating a linear layer as a point on the orthogonality-constrained Stiefel manifold and using a Lie group to transform the calculation process into a linear approximation. The approach was applied to an image classification problem by Yang et al. [32] and showed a significant improvement in the optimization step. Inspired by the method of Nishimori et al., the method proposed in this research constrains the network to the Stiefel manifold so that a meta-learner can perform more stable gradient descent in a limited number of steps and accelerate the adaptation process.…”
Section: Related Work
Mentioning confidence: 99%
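A hedged sketch of the kind of Stiefel-constrained step the excerpt alludes to. This uses a standard tangent-space projection followed by a QR retraction, which is a common choice but may differ from the linear approximation used by Nishimori et al. and Yang et al. [32].

```python
import numpy as np

def stiefel_step(W, euclid_grad, lr=0.1):
    """One descent step on the Stiefel manifold {W in R^{n x p} : W^T W = I}.

    Project the Euclidean gradient onto the tangent space at W, take a
    step, then retract back to the manifold with a QR decomposition.
    """
    # Tangent-space projection: G - W * sym(W^T G).
    sym = 0.5 * (W.T @ euclid_grad + euclid_grad.T @ W)
    tangent = euclid_grad - W @ sym
    # Retraction: orthonormalize the stepped point.
    Q, R = np.linalg.qr(W - lr * tangent)
    # Flip column signs so R has a positive diagonal (continuous retraction).
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(1)
W, _ = np.linalg.qr(rng.standard_normal((6, 3)))
grad = rng.standard_normal((6, 3))
W_next = stiefel_step(W, grad)
print(np.allclose(W_next.T @ W_next, np.eye(3)))  # True: still on the manifold
```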