2019 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit.2019.8849561
A Method to Improve Consensus Averaging using Quantized ADMM

Cited by 5 publications (4 citation statements). References 19 publications.
“…1-(b). For a large-scale network, the communication cost often becomes dominant compared to the computation time, calling for communication-efficient distributed optimization [16], [17], [18], [19], [20], [21].…”
Section: Related Work and Contributions
confidence: 99%
“…To reduce the bandwidth/power usage per channel use, decreasing communication payload sizes is one popular solution, which is enabled by gradient quantization [22], model parameter quantization [23], [21], and model output exchange for large-sized models via knowledge distillation [24].…”
Section: Related Work and Contributions
confidence: 99%
“…c) Payload Size Reduction: To reduce the communication payload size per link, model updates are quantized under centralized [23]-[25] and decentralized network topologies [12], [19], [26]. Alternatively, the entries of model updates can be partially dropped as shown under centralized [27] and decentralized architectures [28].…”
Section: Introduction
confidence: 99%
“…In this respect, for each communication round, sparsifying the number of communication links can reduce the communication cost without compromising the accuracy. To this end, link censoring for negligible model updates is applied under centralized [21] and decentralized network topologies [11], [22]. c) Payload Size Reduction: To reduce the communication payload size per link, model updates are quantized under centralized [23]-[25] and decentralized network topologies [12], [19], [26]. Alternatively, the entries of model updates can be partially dropped as shown under centralized [27] and decentralized architectures [28].…”
confidence: 99%
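The citation statements above describe quantizing the states exchanged between nodes to shrink the per-link communication payload in decentralized averaging. The sketch below illustrates that idea with plain consensus averaging over a ring graph, where each broadcast state is passed through a uniform quantizer before transmission. This is only a minimal illustration of quantized message exchange, not the quantized-ADMM method of the cited paper; the function names, the quantizer step, and the graph are all assumptions made for this example.

```python
import numpy as np

def quantize(x, step=0.01):
    # Deterministic uniform quantizer (assumed for illustration):
    # rounds each entry to the nearest multiple of `step`, so each
    # node transmits a coarser, lower-payload representation.
    return step * np.round(x / step)

def quantized_consensus(values, neighbors, rounds=200, alpha=0.2, step=0.01):
    # Plain consensus averaging in which every transmitted state is
    # quantized before it leaves the node. `neighbors[i]` lists the
    # nodes adjacent to node i in the (undirected) communication graph.
    x = np.array(values, dtype=float)
    for _ in range(rounds):
        tx = quantize(x, step)  # what each node actually broadcasts
        x = x + alpha * np.array([
            sum(tx[j] - tx[i] for j in neighbors[i])
            for i in range(len(x))
        ])
    return x

# Ring of 4 nodes holding values whose true average is 2.5.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = quantized_consensus([1.0, 2.0, 3.0, 4.0], neighbors)
```

Because the graph is undirected, each edge contributes equal and opposite quantized differences to its two endpoints, so the network average is preserved exactly while the individual states contract toward it, up to the resolution of the quantizer.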