DataMUX: Data Multiplexing for Neural Networks
Preprint, 2022
DOI: 10.48550/arxiv.2202.09318

Cited by 3 publications (2 citation statements)
References 0 publications
“…The top row shows the results on the test dataset, the bottom row shows the results on the test dataset with augmentations. The base is a pointwise single network, Ensemble [16], MC Dropout [8], DUN [1], EE [22], MIMO [11], DataMux [20], MIMMO γ does not optimize θ and MIMMO.…”
Section: Methods
confidence: 99%
“…However, the applicability of [24] to M > 2 is unclear along with its confidence calibration. The authors of [20] have demonstrated the advantages of per-member non-linear encoding and decoding the inputs and outputs of a MIMO network. Nevertheless, their method was aimed primarily towards vision transformers [5] and they were not able to provide substantial gains on CNNs without significant hardware overhead.…”
Section: Related Work
confidence: 99%
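The quoted passage describes the core mechanism these citing works attribute to DataMUX: each of the N multiplexed instances gets its own non-linear encoder, the encoded inputs are superimposed into one representation, a single shared network processes it, and per-member decoders recover N separate outputs. A minimal numpy sketch of that data flow, using toy random weights (hypothetical parameters for illustration only, not the trained transformer/CNN models from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4             # number of instances multiplexed together ("members")
d_in, d_h = 8, 16  # toy input / hidden dimensions (assumed)

# Per-member encoders: each instance i has its own projection, so the
# shared backbone can later tell the superimposed instances apart.
enc_W = [rng.normal(size=(d_in, d_h)) for _ in range(N)]
backbone_W = rng.normal(size=(d_h, d_h))  # one shared network for all N
dec_W = [rng.normal(size=(d_h, d_in)) for _ in range(N)]

def relu(x):
    return np.maximum(x, 0.0)

def multiplexed_forward(xs):
    """xs: list of N input vectors -> list of N output vectors,
    all processed in ONE pass through the shared backbone."""
    # 1) per-member non-linear encoding, then superimpose (average)
    mux = np.mean([relu(x @ W) for x, W in zip(xs, enc_W)], axis=0)
    # 2) a single shared forward pass covers all N instances at once
    h = relu(mux @ backbone_W)
    # 3) per-member decoder heads demultiplex the shared representation
    return [h @ W for W in dec_W]

xs = [rng.normal(size=d_in) for _ in range(N)]
ys = multiplexed_forward(xs)
print(len(ys), ys[0].shape)  # N separate outputs from one backbone pass
```

With random weights the outputs are of course meaningless; the sketch only shows why the encode/mux/decode structure lets one network serve N inputs per forward pass, which is the property the citing papers compare against MIMO-style ensembles.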