2022
DOI: 10.48550/arxiv.2203.12868
Preprint

DyRep: Bootstrapping Training with Dynamic Re-parameterization

Abstract: Structural re-parameterization (Rep) methods achieve noticeable improvements on simple VGG-style networks. Despite their prevalence, current Rep methods simply re-parameterize all operations into an augmented network, including those that contribute little to the model's performance. The price to pay is an expensive computational overhead to manipulate these unnecessary operations. To eliminate the above caveats, we aim to bootstrap the training with minimal cost by devising a dynamic re-parameterization …
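The abstract refers to structural re-parameterization: during training, a convolution is augmented with parallel branches (e.g. a 1x1 conv and an identity path), and at inference the branches are folded into a single kernel by linearity of convolution. The following is a minimal numpy sketch of that folding idea in the RepVGG style, not DyRep's actual code; all function names and shapes are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of structural re-parameterization (not the paper's code):
# a 3x3 conv, a 1x1 conv, and an identity branch applied in parallel during
# training can be merged into one 3x3 conv for inference, because convolution
# is linear in its kernel.

def conv2d(x, w):
    """Naive 2D convolution, stride 1, zero padding 1.
    x: (C_in, H, W), w: (C_out, C_in, 3, 3)."""
    c_out, _, _, _ = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i+3, j:j+3] * w[o])
    return out

def pad_1x1_to_3x3(w1):
    """Embed a 1x1 kernel at the center of a 3x3 kernel."""
    w3 = np.zeros((w1.shape[0], w1.shape[1], 3, 3))
    w3[:, :, 1, 1] = w1[:, :, 0, 0]
    return w3

def identity_as_3x3(channels):
    """Express the identity branch as a 3x3 kernel (needs C_in == C_out)."""
    w = np.zeros((channels, channels, 3, 3))
    for c in range(channels):
        w[c, c, 1, 1] = 1.0
    return w

rng = np.random.default_rng(0)
C = 4
w3 = rng.normal(size=(C, C, 3, 3))
w1 = rng.normal(size=(C, C, 1, 1))
x = rng.normal(size=(C, 8, 8))

# Training-time augmented block: sum of three parallel branches.
y_branches = conv2d(x, w3) + conv2d(x, pad_1x1_to_3x3(w1)) + x

# Inference-time re-parameterized block: a single merged 3x3 conv.
w_merged = w3 + pad_1x1_to_3x3(w1) + identity_as_3x3(C)
y_merged = conv2d(x, w_merged)

print(np.allclose(y_branches, y_merged))  # → True
```

The equivalence is exact (up to floating-point error), which is why Rep methods can train an over-parameterized multi-branch network yet deploy a plain single-branch one at no inference cost.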

Cited by 3 publications (2 citation statements)
References 23 publications
“…In addition, the performance of re-parameterization models is closely tied to the types of re-parameterization operations used. Some previous works [28,29] search for the best combination of re-parameterization operations automatically, but the available operation types cap the network's achievable performance. Therefore, model performance can be further improved by exploring more re-parameterization operations.…”
Section: Introduction (confidence: 99%)
“…The advent of automatic feature engineering fuels deep neural networks to achieve remarkable success in a plethora of computer vision tasks, such as image classification [16,18,37,47,52], object detection [2,22], and semantic segmentation [5,53]. In the path of pursuing better performance, current deep learning models generally grow deeper and wider [12,44].…”
Section: Introduction (confidence: 99%)