2019
DOI: 10.5626/jok.2019.46.1.22
Korean Dependency Parsing using the Self-Attention Head Recognition Model

Cited by 2 publications (2 citation statements)
References 0 publications
“…For the continuously changing dimensional emotion recognition task, this paper uses dual-stream CNNs to learn spatiotemporal features from expression samples, extracting static features from expression images and temporal features from expression videos, so as to focus on learning expression-related discriminative features in space and time while suppressing the influence of other facial regions on expression recognition. To make the two-stream CNN network attend to the regions of interest and thereby improve recognition accuracy, this paper introduces an attention mechanism. The attention mechanism first achieved good results in natural language processing [13][14] and was later introduced to image classification and action recognition [15][16]. In emotion recognition, Zhou [17] proposed regional self-attention, Prabhu [18] proposed an attention-based augmented neural network, Xia [19] proposed a multi-scale attention mechanism, Sun [20] added a soft attention mechanism after a CNN model, and Xu [21] proposed a visual attention mechanism.…”
Section: Introduction
confidence: 99%
“…Fig. 4 shows the loss function over the model training process. Compared with traditional feature-based classification models, the performance of deep-learning-based label classification is shown in the figure. Varying the distance of the actually collected signal to verify the performance of the proposed design, the label classification accuracy reaches over 93% when the label recognition distance exceeds 2 m [15][16][17][18]. The experimental results demonstrate the high accuracy and adaptability of the deep-learning-based label recognition and classification technology, as shown in Figure 5.…”
confidence: 99%