2024
DOI: 10.1109/tai.2022.3229653
Toward Deep Generation of Guided Wave Representations for Composite Materials

Cited by 5 publications (4 citation statements) · References 38 publications
“…This distribution can be randomly sampled and processed through the trained decoder, enabling the generation of new realistic phase space projections across different modules. In comparison to Generative Adversarial Networks (GANs), VAEs, including the CVAE, avoid issues like mode collapse and are generally easier to model and more efficient to train [64]. The ability to generate new data is crucial not only for increasing the dataset size for robust diagnostics but also for problems like system parameter estimation and control [24, 65].…”
Section: Generative Ability
confidence: 99%
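The generation step the quote describes, drawing latent vectors from the standard-normal prior and passing them, together with a condition, through the trained decoder, can be sketched as follows. This is a minimal illustration: the decoder here is a hypothetical toy linear map, and all dimensions and weights are invented for the example, not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" decoder weights (illustrative only): maps a
# concatenated [latent z ; condition] vector to a flattened output.
latent_dim, cond_dim, out_dim = 8, 3, 16
W = rng.standard_normal((latent_dim + cond_dim, out_dim))
b = np.zeros(out_dim)

def decode(z, cond):
    """Toy linear decoder standing in for a trained CVAE decoder."""
    return np.tanh(np.concatenate([z, cond]) @ W + b)

# Generate new samples: draw z ~ N(0, I) and condition on a module
# label (here a one-hot vector), as the quoted passage describes.
cond = np.array([1.0, 0.0, 0.0])
samples = [decode(rng.standard_normal(latent_dim), cond) for _ in range(5)]
```

Because the decoder is deterministic, all variability in the generated samples comes from the sampled latent vectors, which is what makes the prior cheap to sample compared with adversarial setups.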
“…We train the Convo-Implicit VAE to minimize the discrepancy between the reconstructed and input SDFs by reducing their mean squared error [24]. Additionally, the latent space (fig.…”
Section: The Latent Space Z and The Projected Coordinates C Are Conca...
confidence: 99%
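The reconstruction objective in this quote is a plain mean squared error between the input signed distance field (SDF) and its reconstruction. A minimal sketch (the function name and grid shapes are illustrative assumptions, not from the cited paper):

```python
import numpy as np

def sdf_mse(sdf_true, sdf_recon):
    """Mean squared error between an input SDF grid and its reconstruction.

    Both arrays are expected to have the same shape; the error is
    averaged over every grid cell.
    """
    sdf_true = np.asarray(sdf_true, dtype=float)
    sdf_recon = np.asarray(sdf_recon, dtype=float)
    return float(np.mean((sdf_true - sdf_recon) ** 2))
```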
“…Note that the reconstruction will not be exact. The VAE is trained to minimize the difference between the output and input [66]. This involves optimizing the weights associated with the encoder and decoder.…”
Section: Variational Auto-encoders
confidence: 99%
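The objective being minimized when training a VAE combines the reconstruction error above with a KL-divergence term that keeps the encoder's latent distribution close to the standard-normal prior. A hedged numpy sketch of the standard β-weighted VAE loss (a general formulation, not the cited paper's exact implementation; `mu` and `log_var` are the encoder's per-dimension Gaussian parameters):

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var, beta=1.0):
    """Standard beta-VAE objective: reconstruction MSE plus a
    beta-weighted KL divergence between N(mu, exp(log_var)) and N(0, I)."""
    recon = float(np.mean((np.asarray(x) - np.asarray(x_recon)) ** 2))
    # Closed-form KL divergence for diagonal Gaussians against N(0, I).
    kl = float(-0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var)))
    return recon + beta * kl
```

Training then amounts to backpropagating this scalar through both networks and updating the encoder and decoder weights jointly, as the quote notes.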
“…Here, β is set to 10⁻⁷ [66]. To achieve a stable convergence, the geometric parameters, contact area and volume fraction are normalized linearly between 0 and 1, while the permeability components are scaled logarithmically due to significant variation in magnitude.…”
Section: Variational Auto-encoders
confidence: 99%
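The two preprocessing schemes this quote contrasts, linear min-max normalization for bounded quantities and a logarithmic transform for permeability components spanning many orders of magnitude, can be sketched as below. The helper names and example values are illustrative assumptions, not drawn from the cited work.

```python
import numpy as np

def minmax_scale(x):
    """Linearly rescale an array to the [0, 1] interval."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

def log_scale(x):
    """Take log10 first, then min-max scale; suited to strictly positive
    quantities whose magnitudes vary over several decades."""
    return minmax_scale(np.log10(np.asarray(x, dtype=float)))

# Illustrative permeability components spanning four orders of magnitude.
perm = np.array([1e-12, 1e-10, 1e-8])
scaled = log_scale(perm)
```

Without the log transform, the two smallest values above would be squashed to nearly 0 by min-max scaling alone, which is exactly the instability the quoted passage is guarding against.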