2022
DOI: 10.48550/arxiv.2205.12694
Preprint
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models

Cited by 2 publications (1 citation statement)
References 0 publications
“…We also note that while some recent works (Foret et al., 2020; Na et al., 2022; Rangwani et al., 2022) propose methods that may be utilized to better train quantized models on LT data, actual training of quantized models on LT data is seldom explored.…”
Section: Related Work
confidence: 99%