2020
DOI: 10.1016/j.neucom.2020.01.064
Siamese capsule networks with global and local features for text classification

Cited by 50 publications (11 citation statements)
References 8 publications
“…36,37 Among different one-shot learning techniques, the Siamese Neural Network (SNN) 38 has attracted much attention and provides state-of-the-art performance in image recognition 39 and bearing fault diagnosis. 40 In particular, by combining CapsNets and SNNs, some researchers have proposed Siamese CapsNets to address face recognition 41 and text classification, 42 which provides a good reference for this investigation. A problem that the Siamese CapsNet faces is its extensive number of training parameters (high computational cost).…”
Section: The Proposed Methods
confidence: 99%
“…The work done in [26] utilized Siamese capsule networks as a tool for calculating the similarity of two short sequences of text via embeddings obtained from Bidirectional Gated Recurrent Units (BGRUs). Employing an approach similar to the Siamese capsule network proposed for face recognition, the embeddings were first passed through primary capsules to identify parts of the text (a representation of words and phrases).…”
Section: Capsule Network
confidence: 99%
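For concreteness, the following is a minimal PyTorch sketch of the architecture summarized in that excerpt: contextual embeddings from a shared BGRU are passed through a primary-capsule layer, and the two resulting capsule representations are compared. All module names, dimensions, and the mean-pooling and cosine-similarity choices are illustrative assumptions, not the implementation from the cited paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(v, dim=-1, eps=1e-8):
    # Standard capsule squashing non-linearity: shrinks short vectors
    # toward zero and keeps long vectors just below unit length.
    norm_sq = (v ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * v / torch.sqrt(norm_sq + eps)

class SiameseCapsuleEncoder(nn.Module):
    # Hypothetical vocabulary and layer sizes; the cited paper's settings
    # may differ.
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=64,
                 num_caps=16, caps_dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional GRU produces contextual word embeddings.
        self.bgru = nn.GRU(emb_dim, hidden, batch_first=True,
                           bidirectional=True)
        # Primary capsules: project each BGRU state into num_caps
        # capsule vectors of size caps_dim.
        self.primary = nn.Linear(2 * hidden, num_caps * caps_dim)
        self.num_caps, self.caps_dim = num_caps, caps_dim

    def forward(self, token_ids):
        x = self.embed(token_ids)                  # (B, T, emb_dim)
        h, _ = self.bgru(x)                        # (B, T, 2*hidden)
        caps = self.primary(h)                     # (B, T, num_caps*caps_dim)
        caps = caps.view(x.size(0), -1, self.num_caps, self.caps_dim)
        caps = squash(caps)                        # capsule activations
        # Pool over time to obtain one capsule set per text.
        return caps.mean(dim=1)                    # (B, num_caps, caps_dim)

def similarity(encoder, text_a, text_b):
    # Siamese use: the same encoder is applied to both texts, and the
    # flattened capsule representations are scored with cosine similarity.
    za = encoder(text_a).flatten(1)
    zb = encoder(text_b).flatten(1)
    return F.cosine_similarity(za, zb, dim=1)
```

In this sketch, weight sharing between the two branches is what makes the network Siamese: a single SiameseCapsuleEncoder instance processes both token-id tensors, so both texts are mapped into the same capsule space before comparison.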
“…The convolutional layer of the capsule network is mainly used to extract local features; it uses capsule vectors to represent the spatial position relationships between local features [95]. To compensate for the capsule network's inability to model the contextual relationships between features, we add an LSTM network before the capsule network to obtain contextual information about the local features.…”
Section: Encoder Layer
confidence: 99%
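A minimal sketch, assuming hypothetical layer sizes, of the encoder-layer ordering described in that excerpt: an LSTM first injects contextual information into the token sequence, and a convolutional primary-capsule layer then groups local features into capsule vectors. Squashing and routing are omitted for brevity; only the LSTM-before-capsule ordering comes from the source.

```python
import torch
import torch.nn as nn

class ContextualCapsuleEncoder(nn.Module):
    # Hypothetical sizes; only the layer ordering (LSTM -> capsule conv)
    # follows the cited description.
    def __init__(self, emb_dim=128, hidden=64, num_caps=8, caps_dim=16):
        super().__init__()
        # The LSTM supplies contextual information about local features
        # before the capsule layer sees them.
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        # Convolutional layer of the capsule network: a 1-D convolution
        # over the sequence whose output channels are grouped into capsules.
        self.conv = nn.Conv1d(2 * hidden, num_caps * caps_dim,
                              kernel_size=3, padding=1)
        self.num_caps, self.caps_dim = num_caps, caps_dim

    def forward(self, embeddings):               # (B, T, emb_dim)
        ctx, _ = self.lstm(embeddings)           # (B, T, 2*hidden)
        feats = self.conv(ctx.transpose(1, 2))   # (B, num_caps*caps_dim, T)
        B, _, T = feats.shape
        caps = feats.view(B, self.num_caps, self.caps_dim, T)
        # Each position now carries num_caps capsule vectors encoding
        # local features plus the context contributed by the LSTM.
        return caps.permute(0, 3, 1, 2)          # (B, T, num_caps, caps_dim)
```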