2021
DOI: 10.1007/978-3-030-69449-4_7

High-Performance Linguistic Steganalysis, Capacity Estimation and Steganographic Positioning


Cited by 20 publications (3 citation statements)
References 26 publications
“…Jiao et al [26] took this a step further by introducing a multi-head attention mechanism, connecting word representations with a multi-headed self-attentive representation for further classification. Subsequently, Zou et al [27] employed Bidirectional Encoder Representations from Transformers (BERT) and Global Vectors for Word Representation (GloVe) to capture inter-sentence contextual association relationships, then extracted context information using a Bi-LSTM, and finally obtained the sensitive semantic features via an attention mechanism for steganographic text detection. Xu et al [28] employed a pre-trained BERT language model to obtain initial contextually relevant word representations, after which the extracted features were fed into an LSTM with attention to obtain the final sentence representation used to classify the detected texts.…”
Section: Related Work
confidence: 99%
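The excerpt above describes what is essentially one pipeline: contextual word representations (BERT, optionally combined with GloVe) fed into a Bi-LSTM, attention-pooled into a sentence vector, and classified as cover or stego. A minimal sketch of that kind of detector, assuming PyTorch and the Hugging Face transformers library, is shown below; the layer sizes, additive attention formulation, and class names are illustrative assumptions, not the cited authors' implementations.

# Hypothetical BERT -> Bi-LSTM -> attention -> classifier steganalysis sketch.
# Hyperparameters and the additive attention are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLstmAttnDetector(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", lstm_hidden=256, num_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)            # contextual word representations
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True) # sentence-level context
        self.attn = nn.Linear(2 * lstm_hidden, 1)                   # additive attention scores
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)   # cover vs. stego

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        seq, _ = self.bilstm(hidden)                                 # (batch, len, 2*hidden)
        scores = self.attn(seq).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        sentence = (weights * seq).sum(dim=1)                        # attention-pooled sentence vector
        return self.classifier(sentence)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertBiLstmAttnDetector()
batch = tokenizer(["an example sentence to be screened for hidden content"],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])          # (1, 2) class scores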
“…Niu et al [35] combined a bidirectional LSTM (Bi-LSTM) neural network with a CNN to capture local and long-term semantic features. Zou et al [59] employed BERT and GloVe components to extract word features, then used a Bi-LSTM with an attention mechanism to extract contextual relations. Peng et al [36] proposed a real-time text steganalysis method and used transfer learning to improve detection accuracy.…”
Section: Text Steganalysis
confidence: 99%
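The hybrid design attributed to Niu et al, which combines convolutional filters for local n-gram patterns with a Bi-LSTM for long-range dependencies, can be sketched in the same style; the kernel widths, embedding source, and pooling choices below are illustrative assumptions rather than the published configuration.

# Hypothetical CNN + Bi-LSTM feature-fusion detector over word embeddings.
import torch
import torch.nn as nn

class CnnBiLstmDetector(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, lstm_hidden=128, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, 64, k, padding=k // 2) for k in (3, 5)]  # local n-gram features
        )
        self.bilstm = nn.LSTM(emb_dim, lstm_hidden, batch_first=True,
                              bidirectional=True)                        # long-term features
        self.classifier = nn.Linear(2 * 64 + 2 * lstm_hidden, num_classes)

    def forward(self, token_ids):
        x = self.emb(token_ids)                                   # (batch, len, emb)
        conv_in = x.transpose(1, 2)                               # (batch, emb, len)
        local = torch.cat([c(conv_in).max(dim=-1).values for c in self.convs], dim=-1)
        seq, _ = self.bilstm(x)
        longterm = seq.mean(dim=1)                                # pooled contextual features
        return self.classifier(torch.cat([local, longterm], dim=-1))

logits = CnnBiLstmDetector()(torch.randint(0, 30000, (2, 20)))    # two 20-token sequences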
“…However, this type of calculation method lacks structural information between entities, making it difficult to measure the similarity between two cases from a semantic point of view. Another option for defining similarity measures between cases is the use of embeddings (Zou et al, 2020). For example, Xu et al (2021) used the word2vec model to obtain semantic information from fault data for fault classification.…”
Section: Introduction
confidence: 99%
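The embedding-based similarity idea mentioned in this excerpt can be illustrated with a small word2vec example: average the word vectors of each case and compare the averages with cosine similarity. The sketch below assumes gensim and a toy corpus; the corpus, vector size, and mean-pooling are assumptions for illustration only.

# Hypothetical word2vec-based similarity between two text "cases".
import numpy as np
from gensim.models import Word2Vec

corpus = [["pump", "bearing", "vibration", "fault"],
          ["motor", "bearing", "overheat", "fault"],
          ["valve", "leak", "pressure", "drop"]]
w2v = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)

def case_vector(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]   # skip out-of-vocabulary words
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(case_vector(corpus[0]), case_vector(corpus[1])))  # semantic similarity of cases 1 and 2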