2020
DOI: 10.48550/arxiv.2010.07230
Preprint

An Evasion Attack against Stacked Capsule Autoencoder

Cited by 2 publications (1 citation statement: 0 supporting, 1 mentioning, 0 contrasting; citing publications from 2021 and 2024)
References 18 publications
“…41 We have addressed this challenge in our structure-property relationship model by means of a SMILES input affine transformation (AT). 42 The effectiveness of AT for input sequence scaling in ML has been demonstrated recently for multilayer perceptron (MLP), 43 convolutional neural networks (CNN), 44,45 and deep recurrent neural network architectures, such as long short-term memory (LSTM) 46 and fast-slow RNN (FS-RNN). 47 Our work identifies the optimal value of the scaling factor of AT, which significantly enhances the accuracy and efficiency of the parent BP algorithm (ATransformedBP).…”
Section: Methods (mentioning)
confidence: 99%
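
The quoted statement describes rescaling an encoded SMILES input sequence with an affine transformation (AT) before training. A minimal sketch of that idea in Python follows; the integer encoding, the function name affine_transform, and the scale value 0.1 are illustrative assumptions, not the optimal scaling factor the cited work identifies.

    import numpy as np

    def affine_transform(tokens, scale, shift=0.0):
        # Elementwise affine map of the encoded sequence: x -> scale * x + shift.
        return scale * np.asarray(tokens, dtype=float) + shift

    # Hypothetical integer encoding of a SMILES string (ethanol, "CCO").
    vocab = {"C": 1, "O": 2}
    encoded = [vocab[ch] for ch in "CCO"]  # [1, 1, 2]

    # Rescale the sequence before it reaches an MLP/CNN/LSTM input layer.
    scaled = affine_transform(encoded, scale=0.1)  # 0.1 is illustrative only
    print(scaled)  # [0.1 0.1 0.2]

In the cited work, the scale parameter would be set to the optimal value identified there; the sketch only shows where such a factor enters the input pipeline.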