2021
DOI: 10.1002/smll.202100181

Predictability of Localized Plasmonic Responses in Nanoparticle Assemblies

Abstract: The design of nanoscale structures with desired optical properties is a key task for nanophotonics. Here, the correlative relationship between local nanoparticle geometries and their plasmonic responses is established using encoder‐decoder neural networks. In the im2spec network, the relationship between local particle geometries and local spectra is established via encoding the observed geometries to a small number of latent variables and subsequently decoding into plasmonic spectra; in the spec2im network, the r…
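The im2spec mapping described in the abstract (a local image patch encoded to a few latent variables, then decoded into a local spectrum) can be sketched in a few lines of PyTorch. This is a minimal illustration with hypothetical shapes (1x64x64 image patches, 500-point spectra, a 10-variable latent space); the architecture in the paper may differ.

```python
import torch
import torch.nn as nn

class Im2Spec(nn.Module):
    """Minimal im2spec-style encoder-decoder (illustrative, not the paper's exact model)."""

    def __init__(self, latent_dim: int = 10, spectrum_len: int = 500):
        super().__init__()
        # Encoder: compress the observed local geometry to a few latent variables.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        # Decoder: expand the latent code into a plasmonic spectrum.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, spectrum_len),
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(patch))
```

A spec2im network would reverse the roles, encoding spectra and decoding image patches; training typically minimizes the mean-squared error between predicted and measured spectra.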

Cited by 24 publications (27 citation statements)
References 26 publications
“…The autoencoder concept can be extended towards learning correlative relationships between structure in an image and property in spectral data, as has been demonstrated with the im2spec encoder-decoder models [209]. Finally, transformation-invariant variational autoencoders (VAEs) build upon classical autoencoders by making the reconstruction process probabilistic and incorporating prior knowledge into the latent space structure [210]. Figure 4 shows the application of rotationally invariant VAEs to the analysis of graphene data.…”
Section: [H3] Autoencoders (mentioning)
confidence: 99%
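The probabilistic reconstruction that distinguishes a VAE from a plain autoencoder comes from encoding each input as a distribution and sampling via the reparameterization trick. A minimal sketch follows; the transformation invariances of the cited rotationally invariant VAE (e.g., learned rotation angles) require extra machinery not shown here, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE: the encoder outputs a distribution (mu, log_var), not a point."""

    def __init__(self, in_dim: int = 784, latent_dim: int = 2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.log_var = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: sample z while keeping gradients flowing.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.dec(z), mu, log_var

def vae_loss(x, x_rec, mu, log_var):
    # Reconstruction term plus KL divergence to the standard-normal prior;
    # the prior is where structural knowledge can be injected into the latent space.
    rec = nn.functional.mse_loss(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + kl
```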
“…It was concluded that the encoder-decoder network with a high-dimensional latent space (10D) yielded better accuracy than one with a 2D latent space. Notably, the use of a latent space enabled a transferable learned representation that could be reused on other data sets [90].…”
Section: For Property Prediction (mentioning)
confidence: 99%
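The 10D-versus-2D comparison in the quoted statement can be reproduced in spirit with a simple loop over latent sizes. Purely illustrative: it reuses the hypothetical Im2Spec sketch above and random stand-in data in place of real imaging/spectral pairs.

```python
import torch
from torch import nn, optim

# Stand-in data; real inputs would be image patches and measured spectra.
patches = torch.randn(64, 1, 64, 64)
spectra = torch.randn(64, 500)

for latent_dim in (2, 10):
    model = Im2Spec(latent_dim=latent_dim)  # hypothetical class from the sketch above
    opt = optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(20):                     # brief full-batch training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(patches), spectra)
        loss.backward()
        opt.step()
    print(f"latent_dim={latent_dim}: reconstruction MSE {loss.item():.4f}")
```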
“…The bidirectional network was made possible by a type of representation learning method: the auto-encoding neural network. The encoder, through multiple non-linear layers of abstraction, represents the input data in a latent space, which is then transformed into the output data by the decoder [90]. Besides the DNN-powered data representation, He et al. further simplified the near-field (electric-field enhancement) response by cherry-picking and downsizing the collected data.…”
Section: Neural Network (mentioning)
confidence: 99%
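Once trained, the encoder half of such a network can be used on its own: the latent codes are the transferable representation the quoted passages refer to. A short illustrative snippet, again assuming the hypothetical Im2Spec class from the earlier sketch:

```python
import torch

model = Im2Spec(latent_dim=10)        # hypothetical class defined earlier
patches = torch.randn(8, 1, 64, 64)   # stand-in image patches

with torch.no_grad():
    z = model.encoder(patches)        # (8, 10) latent codes for downstream analysis
    pred = model.decoder(z)           # (8, 500) decoded spectra
```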
“…For instance, a subset of the current authors has previously shown that autoencoders can be utilized to learn the mapping between imaging and spectral response [39,40]. These studies, in addition to basic physics considerations, suggest that incorporating knowledge of the domain structure should be highly beneficial in optimizing the sampling process. To improve on the process, we employed a complementary approach where, in addition to the measured spectral values in the GP surrogate model, we also use the high-resolution imaging data itself as a secondary input channel to aid predictions.…”
Section: Prior Knowledge Incorporation (mentioning)
confidence: 99%
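One way to read "imaging data as a secondary input channel" is to append image-derived features to the spatial coordinates fed into the GP surrogate. A toy sketch with scikit-learn; the feature choice, kernel, and data here are invented for illustration and do not reproduce the cited workflow.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(40, 2))            # sparsely sampled probe positions
image_feature = rng.uniform(0, 1, size=(40, 1))     # hypothetical image-derived feature
y = np.sin(4 * coords[:, 0]) + image_feature[:, 0]  # toy scalar spectral response

# Concatenate position and image features into one input space so the GP kernel
# can exploit structural information when predicting the spectral response.
X = np.hstack([coords, image_feature])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1e-3),
                              normalize_y=True).fit(X, y)
mean, std = gp.predict(X, return_std=True)  # posterior used to choose the next measurement
```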