2021
DOI: 10.1155/2021/8545686

[Retracted] English Grammar Detection Based on LSTM‐CRF Machine Learning Model

Abstract: Deep learning and neural networks have been widely used in speech, vocabulary, text, image, and other information-processing fields, and have achieved excellent research results. A neural network algorithm and a prediction model were used in this paper to study and explore English grammar. Aiming at the application requirements of English grammar accuracy and standardization, we propose a machine learning model based on LSTM-CRF to detect and analyze English grammar. This paper briefly…

Cited by 5 publications (6 citation statements)
References 10 publications
“…The LSTM model is used to deal with the sequence labeling problem; it can make full use of the information in the entire text sequence, including the relationship information between words, and use this information when processing each word [23, 24]. An LSTM model contains many LSTM cells, and each cell contains an input gate, an output gate, a forget gate, and a memory cell [25, 26].…”
Section: Our Methods
confidence: 99%
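The cell structure described in this quoted passage corresponds to the standard LSTM update equations. The following is a minimal, generic NumPy sketch of a single cell step, not the retracted paper's implementation; the function name lstm_cell_step and the parameter dictionaries W, U, b are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM cell step with input, forget, and output gates and a memory cell.
    W, U, b hold the weights/biases for the four internal transforms i, f, o, g."""
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])   # candidate memory
    c_t = f * c_prev + i * g                                # memory cell update
    h_t = o * np.tanh(c_t)                                  # new hidden state
    return h_t, c_t

# Toy usage: 4-dim input, 3-dim hidden state, random weights.
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 4)) for k in "ifog"}
U = {k: rng.standard_normal((3, 3)) for k in "ifog"}
b = {k: np.zeros(3) for k in "ifog"}
h, c = lstm_cell_step(rng.standard_normal(4), np.zeros(3), np.zeros(3), W, U, b)
```

For sequence labeling, one such step is applied per token, and the resulting hidden states feed a tagging layer, as described in the citing text.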
“…There are many techniques that can be used in this phase, since every dataset and every model has its own characteristics; hence, different datasets may require different techniques. For instance, the work in [8] used three different techniques, namely Word Segmentation, Character Digitization, and Vector Construction, while the work in [10] used only a Normalization technique.…”
Section: Preprocessing
confidence: 99%
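As an illustration of the three preprocessing steps attributed to [8], here is a hypothetical sketch, with function and variable names invented for illustration rather than taken from the cited work: naive whitespace word segmentation, integer "digitization" of words, and one-hot vector construction.

```python
import numpy as np

def segment(sentence):
    # Naive whitespace word segmentation; real systems would use a tokenizer.
    return sentence.lower().split()

def build_vocab(corpus):
    # Map every word seen in the corpus to an integer id ("digitization").
    vocab = {"<unk>": 0}
    for sentence in corpus:
        for word in segment(sentence):
            vocab.setdefault(word, len(vocab))
    return vocab

def digitize(sentence, vocab):
    return [vocab.get(w, vocab["<unk>"]) for w in segment(sentence)]

def one_hot(ids, vocab_size):
    # Vector construction: one one-hot row per token.
    vectors = np.zeros((len(ids), vocab_size))
    vectors[np.arange(len(ids)), ids] = 1.0
    return vectors

corpus = ["She go to school yesterday", "He goes to school every day"]
vocab = build_vocab(corpus)
vectors = one_hot(digitize(corpus[0], vocab), len(vocab))
```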
“…For an accurate classification, the features must also be relevant. For example, the work in [8] uses the LSTM algorithm with a CRF layer added on top of it, while the work in [9] mentions features such as "Prepay transactions" and "Letters of credit" in its FS-WOA-DNN algorithm research.…”
Section: Features and Classification
confidence: 99%
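A common way to realize "LSTM with a CRF on top" for sequence labeling is a BiLSTM encoder whose per-token emission scores feed a CRF layer. The sketch below assumes PyTorch and the third-party pytorch-crf package; the class name, layer sizes, and hyperparameters are illustrative and not taken from the cited works.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.emit = nn.Linear(2 * hidden_dim, num_tags)  # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)       # learned tag transitions

    def loss(self, tokens, tags, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)     # negative log-likelihood

    def predict(self, tokens, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)     # Viterbi-decoded tag paths
```

Training would minimize model.loss(tokens, tags, mask) over padded batches, and model.predict returns the best-scoring tag sequence for each sentence.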
“…This article has been retracted by Hindawi following an investigation undertaken by the publisher [1]. This investigation has uncovered evidence of one or more of the following indicators of systematic manipulation of the publication process:
- Discrepancies in scope
- Discrepancies in the description of the research reported
- Discrepancies between the availability of data and the research described
- Inappropriate citations
- Incoherent, meaningless and/or irrelevant content included in the article
- Peer-review manipulation
…”
confidence: 99%