2021
DOI: 10.4218/etrij.2020-0460
DG‐based SPO tuple recognition using self‐attention M‐Bi‐LSTM

Abstract: This study proposes a dependency grammar‐based self‐attention multilayered bidirectional long short‐term memory (DG‐M‐Bi‐LSTM) model for subject–predicate–object (SPO) tuple recognition from natural language (NL) sentences. To add recent knowledge to the knowledge base autonomously, it is essential to extract knowledge from numerous NL data. Therefore, this study proposes a high‐accuracy SPO tuple recognition model that requires a small amount of learning data to extract knowledge from NL sentences. The accura…
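The abstract describes recognizing subject–predicate–object tuples with the help of a dependency grammar. As a minimal illustration of the dependency-grammar intuition only (this is not the paper's DG‐M‐Bi‐LSTM model; the function name, parse representation, and hand-written example parse below are assumptions for illustration), one can read an SPO tuple off a dependency parse by locating the root predicate and its subject/object dependents:

```python
# Toy sketch: reading an SPO tuple off a dependency parse.
# NOT the paper's learned model; hand-annotated parse triples stand in
# for a real parser's output.

def extract_spo(parse):
    """parse: list of (token, head_index, deprel) tuples, 0-based heads,
    head_index == -1 for the root. Returns (subject, predicate, object)
    or None if the pattern is not found."""
    root = next((i for i, (_, h, _) in enumerate(parse) if h == -1), None)
    if root is None:
        return None
    subj = next((t for t, h, r in parse if h == root and r == "nsubj"), None)
    obj = next((t for t, h, r in parse if h == root and r in ("obj", "dobj")), None)
    if subj and obj:
        return (subj, parse[root][0], obj)
    return None

# Hand-annotated parse of "Obama visited Hawaii"
# (relation labels follow Universal Dependencies conventions).
parse = [
    ("Obama", 1, "nsubj"),    # subject of "visited"
    ("visited", -1, "root"),  # root predicate
    ("Hawaii", 1, "obj"),     # object of "visited"
]
print(extract_spo(parse))  # ('Obama', 'visited', 'Hawaii')
```

The paper's contribution is to learn this kind of mapping with a self-attention multilayered Bi-LSTM rather than hand-written rules; the rule-based sketch only shows what "SPO tuple recognition from a dependency parse" means concretely.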

Cited by 6 publications (3 citation statements)
References 49 publications
“…However, large‐scale KBs require considerable effort to expand new knowledge manually. Therefore, studies have been conducted to extract autonomously subject–predicate–object (SPO) tuples from natural language (NL) text to add new knowledge to KBs at a low cost [4]. However, NL sentences may contain ambiguous entities, such as pronouns; accordingly, the SPO tuples extracted from these NL sentences cannot be expressed as knowledge because they may contain these unclear entities.…”
Section: Introduction
Confidence: 99%
“…For instance, a cluster of mentions referring to the same entity (Obama and he) can be generated from the text “Obama was born in Hawaii in 1961. And he was elected President of the United States 48 years later.” Therefore, CR is an important problem in natural language processing (NLP) tasks such as information extraction [4], machine translation [5], and question answering [6]. We propose CR using a multiple embedding‐based span bidirectional encoder representation from a transformer (CR‐M‐SpanBERT) model.…”
Section: Introduction
Confidence: 99%
“…In the second paper in this special issue [2], “CR‐M‐SpanBERT: Multiple‐embedding‐based DNN Coreference Resolution Using Self‐attention SpanBERT” by Jung, a model is proposed to incorporate multiple embeddings for coreference resolution based on the SpanBERT architecture. The experimental results show that multiple embeddings can improve the coreference resolution performance regardless of the employed baseline model, such as LSTM, BERT, and SpanBERT.…”
Confidence: 99%