ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9053947
Sequence-To-Subsequence Learning With Conditional GAN For Power Disaggregation

Cited by 53 publications (61 citation statements)
References 7 publications
“…EnerGAN++ is a noise-robust model and achieves good performance regardless of noise, in contrast to the SEQ2SUB model. The SEQ2SUB model achieves good results in normal cases, as shown in [31]; however, given a noisy aggregate signal as input, its performance degrades (Fig. 9).…”
Section: Robustness To Noise
confidence: 79%
“…9. Comparison between the EnerGAN++ results obtained after applying Gaussian noise to the aggregate signal and the corresponding results of the sequence-to-subsequence conditional GAN model of [31]. Smaller MAE values indicate better performance.…”
Section: Robustness To Noise
confidence: 99%
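The excerpt above describes the robustness test only in words: Gaussian noise is added to the aggregate mains signal and disaggregation quality is compared via MAE. A minimal NumPy sketch of that evaluation step follows; the function names, the SNR-based noise level, and the placeholder `disaggregate` model are illustrative assumptions, not the cited papers' code.

```python
import numpy as np

def add_gaussian_noise(aggregate, snr_db=20.0, rng=None):
    """Corrupt the aggregate mains signal with zero-mean Gaussian noise at a
    chosen signal-to-noise ratio (the noise level is an assumption; the cited
    works do not fix a single value in this excerpt)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(aggregate ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=aggregate.shape)
    return aggregate + noise

def mae(y_true, y_pred):
    """Mean absolute error between ground-truth and predicted appliance power."""
    return np.mean(np.abs(y_true - y_pred))

# Hypothetical usage: `disaggregate` stands in for any trained NILM model
# (e.g. the seq2subseq cGAN of [31] or EnerGAN++); it is not a real API here.
# noisy_mains = add_gaussian_noise(mains, snr_db=20.0)
# appliance_hat = disaggregate(noisy_mains)
# print("MAE under noise:", mae(appliance_true, appliance_hat))
```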
“…Some authors also normalized the target values for training the DNNs. While some publications mention that different normalization strategies were tried out, only two studies report on the influence of normalization strategies on training efficiency and testing performance: [71] finds that instance normalization [72] performs better than batch normalization [73], and [69] concludes that L2-normalization works best.…”
Section: Preprocessing
confidence: 99%
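The excerpt contrasts normalization strategies for the input/target sequences without showing them; below is a minimal NumPy sketch of the three options it names. The function names, the windowed-input shape, and the simplified whole-batch variant of batch normalization are illustrative assumptions, not the implementations of [69], [71], [72], or [73].

```python
import numpy as np

def instance_normalize(windows, eps=1e-8):
    """Normalize each window by its own mean and standard deviation
    (per-instance statistics, in the spirit of instance normalization [72])."""
    mu = windows.mean(axis=1, keepdims=True)
    sigma = windows.std(axis=1, keepdims=True)
    return (windows - mu) / (sigma + eps)

def batch_normalize(windows, eps=1e-8):
    """Normalize with statistics over the whole batch (a simplified,
    single-channel analogue of batch normalization [73])."""
    mu = windows.mean()
    sigma = windows.std()
    return (windows - mu) / (sigma + eps)

def l2_normalize(windows, eps=1e-8):
    """Scale each window to unit L2 norm, the strategy reported to work best in [69]."""
    norms = np.linalg.norm(windows, axis=1, keepdims=True)
    return windows / (norms + eps)

# Example: a batch of 4 synthetic mains windows of 64 samples each.
batch = np.abs(np.random.default_rng(0).normal(300.0, 120.0, size=(4, 64)))
print(instance_normalize(batch).shape, batch_normalize(batch).shape, l2_normalize(batch).shape)
```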