2022
DOI: 10.1016/j.dsp.2022.103686
One to multiple mapping dual learning: Learning multiple signals from one mixture

Cited by 2 publications (3 citation statements)
References 29 publications
“…We compare quantitatively and qualitatively the proposed Transformer-guided GAN with the frontier algorithms FastICA [2], NMF [4], Neural egg separation (NES) [24], AGAN [12], and PDualGAN [11] to indicate its superiority on SCBIS tasks. To ensure a fair comparison, the AGAN has the same UNet-GAN structure parameters as configured in this paper, except that the channel reduction factor k is set to 8 in the self-attentive module; the PDualGAN consists of 2 DualGANs, each using 2 UNet-GANs in the same configuration as this paper to implement mix-to-source mapping.…”
Section: Results
confidence: 99%
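The baseline configuration quoted above can be pictured as a small module hierarchy. The sketch below is only a hypothetical reading of that description: 2 DualGANs in parallel, each pairing 2 UNet-style generators, here interpreted as a mixture-to-source and a source-to-mixture mapping in the spirit of dual learning. The class names, the toy generator, and all layer sizes are placeholders, not the actual UNet-GAN configuration of either paper.

```python
import torch
import torch.nn as nn

class ToyUNetGenerator(nn.Module):
    # Placeholder standing in for a UNet-GAN generator; one down/up stage only.
    def __init__(self, ch=1, hidden=16):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(ch, hidden, 4, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(hidden, ch, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, x):
        return self.up(self.down(x))

class DualGAN(nn.Module):
    # Two generators per DualGAN: mixture -> source and source -> mixture.
    def __init__(self):
        super().__init__()
        self.mix_to_source = ToyUNetGenerator()
        self.source_to_mix = ToyUNetGenerator()

    def forward(self, mixture):
        source = self.mix_to_source(mixture)
        reconstructed_mix = self.source_to_mix(source)
        return source, reconstructed_mix

class PDualGAN(nn.Module):
    # Parallel DualGANs, one per target source in the two-source mixture.
    def __init__(self, num_sources=2):
        super().__init__()
        self.duals = nn.ModuleList([DualGAN() for _ in range(num_sources)])

    def forward(self, mixture):
        return [dual(mixture) for dual in self.duals]

# Usage: a batch of 1-channel spectrogram-like mixtures.
mix = torch.randn(4, 1, 64, 64)
outputs = PDualGAN()(mix)  # list of (estimated source, reconstructed mixture)
```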
“…where · denotes the dot-product operation, √d_k is used to scale the dot-product result to make the gradients of the model more stable, and softmax is used to calculate the attention score by normalization, as shown in Equation (10). Finally, the output of MHA is obtained by concatenating the attention results of the individual head vectors, as shown in Equation (11).…”
Section: Transformer Scheme
confidence: 99%
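The excerpt above paraphrases standard scaled dot-product attention and multi-head attention (MHA). The following is a minimal sketch of those two steps under the conventional Transformer formulation; the head count, tensor shapes, and output projection w_o are illustrative, and the per-head input projections are omitted (q, k, v are assumed already projected), so this is not the citing paper's exact Equations (10) and (11).

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # softmax(Q K^T / sqrt(d_k)) V; dividing by sqrt(d_k) keeps the dot
    # products in a range where the softmax gradients stay stable (Eq. (10)).
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ v

def multi_head_attention(q, k, v, num_heads, w_o):
    # Attend per head, then concatenate the head outputs and apply the
    # output projection (Eq. (11)).
    b, t, d = q.shape

    def split(x):  # (b, t, d) -> (b, heads, t, d_head)
        return x.view(b, t, num_heads, d // num_heads).transpose(1, 2)

    heads = scaled_dot_product_attention(split(q), split(k), split(v))
    concat = heads.transpose(1, 2).reshape(b, t, d)  # concatenate heads
    return concat @ w_o

# Usage with toy shapes.
q = k = v = torch.randn(2, 10, 64)
w_o = torch.randn(64, 64)
out = multi_head_attention(q, k, v, num_heads=8, w_o=w_o)  # (2, 10, 64)
```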