2020
DOI: 10.3390/pr8070751
Product Quality Detection through Manufacturing Process Based on Sequential Patterns Considering Deep Semantic Learning and Process Rules

Abstract: Companies accumulate a large amount of production process data during product manufacturing. Mining sequence data from the production process can enable a company to evaluate the manufacturing process, find the key factors affecting product quality, and improve product quality. However, the production process mainly exists in the form of text. To solve this problem, we propose a novel frequent pattern mining algorithm (EABMC) based on the text context semantics and rules of the manufacturing process to r…

Cited by 8 publications (5 citation statements)
References 36 publications
“…Compared with the LSA model, the LDA model [28,29] constructs the mapping relationship among the three variable spaces described above from the perspective of probability distribution, avoiding the heavy matrix computation that singular value decomposition requires. By introducing the theory of the Dirichlet and multinomial distributions [30,31], the LDA model can carry out latent semantic mining from a probabilistic perspective [32][33][34].…”
Section: Literature Review
confidence: 99%
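The latent semantic mining described in this excerpt can be illustrated with a minimal sketch, assuming scikit-learn is available; the tiny corpus and the choice of two topics are purely illustrative, not from the cited works.

```python
# Minimal sketch: latent semantic mining with LDA via scikit-learn.
# The four-document corpus and n_components=2 are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "process quality defect inspection",
    "defect detection on the production line",
    "dirichlet distribution topic model",
    "probability distribution over latent topics",
]
X = CountVectorizer().fit_transform(docs)           # term-count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)                    # per-document topic distribution
print(doc_topic.shape)  # (4, 2)
```

Each row of `doc_topic` is a probability distribution over the latent topics (rows sum to 1), which is exactly the probabilistic view of documents that the excerpt contrasts with the SVD-based LSA approach.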
“…The emergence of pre-training has accelerated the development of word representation and addressed the problem of polysemy, in which one word has multiple meanings. ELMo [18] is a pretrained contextual word-embedding model that uses a BiLSTM language model consisting of a forward and a backward language model. The goal of GPT (Generative Pre-Training) [19] is to learn a general representation that can be applied to many tasks.…”
Section: Word Representation Technologies
confidence: 99%
“…In natural language processing problems, each word is influenced by the words before and after it, so the textual context must be taken into account [18]. Therefore, in this paper, BiLSTM is used for feature extraction. The LSTM unit is governed by the standard gate equations:

i_t = σ(w_i · [h_{t−1}, x_t] + b_i)
f_t = σ(w_f · [h_{t−1}, x_t] + b_f)
o_t = σ(w_o · [h_{t−1}, x_t] + b_o)
c̃_t = tanh(w_c · [h_{t−1}, x_t] + b_c)
c_t = f_t ⊗ c_{t−1} + i_t ⊗ c̃_t
h_t = o_t ⊗ tanh(c_t)

Here, σ is the sigmoid activation function; ⊗ is element-wise multiplication; tanh is the hyperbolic tangent activation function; x_t is the unit input; i_t, f_t, and o_t are the input gate, forget gate, and output gate at moment t; w and b are the weight matrices and bias vectors of the input, forget, and output gates; c_t is the state at moment t; and h_t is the output at moment t.…”
Section: Feature Extraction Based On AttBiLSTM
confidence: 99%
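A minimal NumPy sketch of one LSTM step following the gate definitions in this excerpt; the dimensions and random weights are illustrative assumptions, not the paper's trained model. A BiLSTM runs such a step forward and backward over the sequence and concatenates the two outputs.

```python
# Sketch of a single LSTM step; w maps the concatenated [h_{t-1}, x_t] per gate.
# Shapes and random weights are illustrative, not from the cited paper.
import numpy as np

def lstm_step(x_t, h_prev, c_prev, w, b):
    z = np.concatenate([h_prev, x_t])       # [h_{t-1}, x_t]
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i_t = sigmoid(w["i"] @ z + b["i"])      # input gate
    f_t = sigmoid(w["f"] @ z + b["f"])      # forget gate
    o_t = sigmoid(w["o"] @ z + b["o"])      # output gate
    c_hat = np.tanh(w["c"] @ z + b["c"])    # candidate cell state
    c_t = f_t * c_prev + i_t * c_hat        # new cell state
    h_t = o_t * np.tanh(c_t)                # unit output
    return h_t, c_t

rng = np.random.default_rng(0)
d_in, d_h = 4, 3                            # assumed input / hidden sizes
w = {k: rng.standard_normal((d_h, d_h + d_in)) for k in "ifoc"}
b = {k: np.zeros(d_h) for k in "ifoc"}
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), w, b)
print(h.shape)  # (3,)
```

Because h_t is an output gate (in (0, 1)) times tanh of the cell state (in (−1, 1)), every component of the output stays strictly inside (−1, 1).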
“…At present, target detection algorithms with a CNN as the backbone network have made major breakthroughs in various fields, and deep-learning-based target detection is the task closest to product quality monitoring in the traditional sense. Structurally, these detection methods can be divided into two-stage networks, represented by the Faster Region-based CNN (Faster R-CNN) [16], and one-stage networks, represented by the You Only Look Once (YOLO) series [17][18][19][20] and the Single-Shot Detector (SSD) [21]. The former first trains and generates proposal boxes that may contain the target and then further obtains the detection result, whereas the latter directly uses the features extracted by the backbone to predict the target category and location.…”
Section: Introduction
confidence: 99%
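The structural difference between the two detector families described in this excerpt can be sketched with stub callables; `backbone`, `rpn`, and `head` are hypothetical stand-ins for the real networks, not code from the cited works.

```python
# Illustrative sketch (not from the cited works): two-stage (Faster R-CNN
# style) vs. one-stage (YOLO/SSD style) detection pipelines, with callables
# standing in for the actual neural networks.

def two_stage_detect(image, backbone, rpn, head):
    features = backbone(image)             # shared feature extraction
    proposals = rpn(features)              # stage 1: boxes that may contain a target
    # stage 2: classify and refine each proposal
    return [head(features, box) for box in proposals]

def one_stage_detect(image, backbone, head):
    features = backbone(image)             # shared feature extraction
    return head(features)                  # predict categories and locations directly
```

The sketch makes the trade-off visible: the two-stage path does extra per-proposal work (typically more accurate, slower), while the one-stage path emits all predictions in a single pass over the features.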