2020
DOI: 10.4218/etrij.2019-0441

Zero‐anaphora resolution in Korean based on deep language representation model: BERT

Abstract: It is necessary to achieve high performance in the task of zero anaphora resolution (ZAR) for completely understanding texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed for building ZAR systems, owing to the success of deep learning in recent years. However, the objective of building a high-quality ZAR system is far from being achieved even using these models. To enhance the current ZAR techniques, we fine-tuned a pretrained bidirectional enc…
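
The excerpt does not spell out how the fine-tuning was set up, so the following is only a minimal sketch, assuming ZAR is cast as token classification over the input (tagging the antecedent of an omitted argument) with the Hugging Face transformers library; the checkpoint name, BIO label set, and toy sentence are placeholders rather than the authors' configuration.

# Minimal sketch: fine-tuning a pretrained BERT for zero-anaphora resolution,
# cast here as token classification (marking antecedent tokens of an omitted
# argument). Checkpoint, labels, and example data are illustrative only.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

MODEL_NAME = "bert-base-multilingual-cased"   # placeholder; the paper used a Korean-pretrained BERT
LABELS = ["O", "B-ANT", "I-ANT"]              # hypothetical BIO tags for antecedent spans

tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
model = BertForTokenClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One toy training step on a Korean sentence pair with an omitted subject.
sentence = "철수는 학교에 갔다. 오늘 시험을 본다."
encoding = tokenizer(sentence, return_tensors="pt", truncation=True)
labels = torch.zeros_like(encoding["input_ids"])  # all "O"; real data would mark the antecedent span "철수"

model.train()
optimizer.zero_grad()
loss = model(**encoding, labels=labels).loss
loss.backward()
optimizer.step()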

Cited by 12 publications (13 citation statements).
References 14 publications.
“…Second, previous studies have found that when Korean phrases are classified using both word and grapheme information, classification performance is higher than conventional models that use only word information or grapheme information [19]. In this study, it also appears that a higher performance can be obtained by combining grapheme- and syllable-based techniques.…”
Section: Discussion (mentioning)
confidence: 53%
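
The cited study [19] is only summarized in this statement, so the sketch below is an assumption about one way such a combination could look: a syllable-level embedding concatenated with embeddings of the decomposed jamo (graphemes). The decomposition uses the standard Unicode arithmetic for precomposed Hangul syllables; the class name, dimensions, and concatenation strategy are illustrative, not the cited model.

# Minimal sketch: combining syllable-level and grapheme(jamo)-level information
# for Korean text by concatenating their embeddings per syllable.
import torch
import torch.nn as nn

def decompose_jamo(syllable: str):
    """Split one precomposed Hangul syllable into (lead, vowel, tail) indices."""
    code = ord(syllable) - 0xAC00
    if not 0 <= code <= 11171:            # outside the Hangul syllables block
        return None
    return code // 588, (code % 588) // 28, code % 28

class SyllableJamoEmbedding(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.syllable_emb = nn.Embedding(11172, dim)  # all precomposed syllables
        self.lead_emb = nn.Embedding(19, dim)
        self.vowel_emb = nn.Embedding(21, dim)
        self.tail_emb = nn.Embedding(28, dim)

    def forward(self, syllable: str) -> torch.Tensor:
        jamo = decompose_jamo(syllable)
        assert jamo is not None, "expected a precomposed Hangul syllable"
        lead, vowel, tail = jamo
        parts = [
            self.syllable_emb(torch.tensor(ord(syllable) - 0xAC00)),
            self.lead_emb(torch.tensor(lead)),
            self.vowel_emb(torch.tensor(vowel)),
            self.tail_emb(torch.tensor(tail)),
        ]
        return torch.cat(parts)            # concatenated syllable + jamo representation

emb = SyllableJamoEmbedding()
print(emb("한").shape)  # torch.Size([256])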
“…Kim, Ra and Lim [19] proposed a BERT model that solves the task of zero anaphora resolution (omission of the subject in a sentence) for completely understanding texts in Korean.…”
Section: Text Mining for the Korean Language (mentioning)
confidence: 99%
“…Pretrained models are commonly used to improve overall network performance [27][28][29][30]. However, true success depends on the nature of the computer vision task and how well the model fits the task.…”
Section: Importance of the ImageNet-Pretrained ResNet-18 Network (mentioning)
confidence: 99%
“…BERT is bidirectional, unlike ELMo. Transformer encoder-based word embedding is used [18]. The model is fine-tuned to classify patents.…”
Section: Related Work (mentioning)
confidence: 99%
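
The citing paper's setup is not described beyond this sentence, so the following is a minimal sketch of the generic fine-tune-and-classify pattern it refers to, using a BERT sequence-classification head; the checkpoint, number of classes, and example text are placeholders.

# Minimal sketch: fine-tuning a pretrained BERT to classify patent text.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

MODEL_NAME = "bert-base-uncased"              # placeholder checkpoint
NUM_CLASSES = 8                               # hypothetical number of patent classes

tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
model = BertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One toy gradient step on a single (text, label) pair.
batch = tokenizer(["A device for wireless power transfer ..."],
                  return_tensors="pt", truncation=True, padding=True)
labels = torch.tensor([3])                    # hypothetical class id

model.train()
optimizer.zero_grad()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()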