Companion Proceedings of The Web Conference 2018 (WWW '18)
DOI: 10.1145/3184558.3186355

Learning the Chinese Sentence Representation with LSTM Autoencoder

Abstract: This study retains the meanings of the original text using an Autoencoder (AE). It trains the neural network model with three types of loss, hoping that after compressing the sentence features the model can still decompress the original input sentences and classify the correct targets, such as positive or negative sentiment. In this way, it is expected to obtain the features most relevant to the targets (the compressed sentence features) for classification, rather than using th…
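A minimal sketch of the architecture the abstract describes: an LSTM encoder compresses a sentence into a feature vector, and that vector both drives a decoder (reconstruction) and a classifier head (e.g. sentiment). Since the abstract is truncated, the model names, dimensions, and the exact combination of the three losses below are illustrative assumptions, not the paper's published configuration.

```python
# Hedged sketch: LSTM autoencoder whose compressed sentence vector is
# trained jointly for reconstruction and target classification.
# All hyperparameters and the loss mix are assumptions.
import torch
import torch.nn as nn

class LSTMAutoencoderClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.vocab_out = nn.Linear(hid_dim, vocab_size)  # reconstruction head
        self.classify = nn.Linear(hid_dim, n_classes)    # target head

    def forward(self, tokens):
        emb = self.embed(tokens)                # (B, T, E)
        _, (h, c) = self.encoder(emb)           # h[-1]: compressed sentence feature
        dec_out, _ = self.decoder(emb, (h, c))  # teacher-forced reconstruction
        return self.vocab_out(dec_out), self.classify(h[-1])

model = LSTMAutoencoderClassifier(vocab_size=10_000)
criterion = nn.CrossEntropyLoss()

tokens = torch.randint(0, 10_000, (4, 20))  # toy batch of token ids
labels = torch.randint(0, 2, (4,))          # toy sentiment labels
logits_recon, logits_class = model(tokens)
loss = (criterion(logits_recon.transpose(1, 2), tokens)  # decompress the input
        + criterion(logits_class, labels))               # classify the target
loss.backward()
```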

Cited by 6 publications (3 citation statements)
References 15 publications
“…Motivated by [17] and [18], as well as our work in a different domain [19], we propose a novel LSTM denoising auto-encoder for modulation classification wherein the auto-encoder and classifier are trained simultaneously. Fig.…”
Section: An LSTM Denoising Auto-Encoder
confidence: 99%
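The statement describes training a denoising auto-encoder and a classifier simultaneously. The toy sketch below illustrates that idea under assumptions of my own (Gaussian input noise, MSE reconstruction of the clean signal, an 8-class modulation head); none of these choices are confirmed by the cited work.

```python
# Hedged sketch of the denoising twist: the encoder sees a corrupted
# input but is penalized against the clean signal, while the classifier
# trains on the same code vector. Shapes and noise model are assumed.
import torch
import torch.nn as nn

enc = nn.LSTM(input_size=2, hidden_size=64, batch_first=True)  # e.g. I/Q samples
dec = nn.Linear(64, 2)                                         # per-step reconstruction
cls = nn.Linear(64, 8)                                         # e.g. 8 modulation classes

x = torch.randn(16, 128, 2)              # clean sequences (B, T, features)
x_noisy = x + 0.1 * torch.randn_like(x)  # corrupt the encoder input

out, (h, _) = enc(x_noisy)
recon_loss = nn.functional.mse_loss(dec(out), x)  # reconstruct the *clean* input
class_loss = nn.functional.cross_entropy(cls(h[-1]), torch.randint(0, 8, (16,)))
(recon_loss + class_loss).backward()     # auto-encoder and classifier train together
```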
“…where α is the weight parameter for the losses. Thanks to the reconstruction loss in the combined loss function, the LSTM parameters will be optimized so that the state vector encodes the necessary information to reconstruct the sequences and the features discriminating the anomalous inputs [34]. Although the RNNs were used as an unsupervised feature extracting stage in conjunction with regression and classification stages, thanks to the SVDD, our approach is able to remain fully unsupervised.…”
Section: Joint Optimization of the Feature Extraction and the Outlier…
confidence: 99%
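A minimal sketch of what an α-weighted combination of an SVDD term and a reconstruction term can look like: the SVDD term pulls the LSTM state vector toward a centre, while the reconstruction term forces that vector to retain enough information to rebuild the sequence. The centre, the value of α, and the concrete losses are assumptions for illustration, not taken from the cited paper.

```python
# Hedged sketch of a combined loss: SVDD-style distance to a centre
# plus an alpha-weighted sequence reconstruction term.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
decoder = nn.Linear(32, 4)
centre = torch.zeros(32)   # SVDD centre (often fixed after an initial pass)
alpha = 0.5                # weight between the two losses (assumed value)

x = torch.randn(8, 50, 4)  # batch of sequences (B, T, features)
out, (h, _) = lstm(x)
state = h[-1]                                          # per-sequence state vector

svdd_loss = ((state - centre) ** 2).sum(dim=1).mean()  # distance to the centre
recon_loss = nn.functional.mse_loss(decoder(out), x)   # sequence reconstruction
loss = svdd_loss + alpha * recon_loss                  # combined objective
loss.backward()
```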
“…Randomly initialize the One-Class Classifier. While X arrives do: compute H (33); compute the parameter update (34); end while. Although Algorithm 1 requires the whole training set for the parameter optimization, we introduce an algorithm for the online setup where we update the model parameters with a single observed sequence. This procedure is given in Algorithm 2.…”
Section: Algorithm 2 Online Anomaly Detection Training Algorithm
confidence: 99%
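A toy sketch of the online setup the statement summarizes: rather than optimizing over the whole training set, apply one gradient step per arriving sequence. The model, loss, and learning rate below are placeholders, not the cited Algorithm 2.

```python
# Hedged sketch of single-sequence online updates ("while X arrives do").
import torch
import torch.nn as nn

model = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
head = nn.Linear(32, 4)
opt = torch.optim.SGD(list(model.parameters()) + list(head.parameters()), lr=1e-3)

def stream():
    # stand-in for sequences arriving one at a time
    for _ in range(100):
        yield torch.randn(1, 50, 4)

for x in stream():
    out, _ = model(x)
    loss = nn.functional.mse_loss(head(out), x)  # placeholder per-sequence loss
    opt.zero_grad()
    loss.backward()
    opt.step()                                   # update from a single sequence
```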