2022
DOI: 10.3233/sw-222925
Question answering with deep neural networks for semi-structured heterogeneous genealogical knowledge graphs

Abstract: With the rising popularity of user-generated genealogical family trees, new genealogical information systems have been developed. State-of-the-art natural question answering algorithms use deep neural network (DNN) architectures based on self-attention networks. However, some of these models use sequence-based inputs and are not suited to graph-based structures, while graph-based DNN models rely on a level of knowledge-graph comprehensiveness that does not exist in the genealogical domain. Mo…

Cited by 9 publications (3 citation statements)
References 115 publications (176 reference statements)
“…Pre-trained NLMs have been state of the art for many Natural Language Processing (NLP) tasks. For example, NLMs such as BERT (Devlin et al., 2018) and ALBERT (Lan et al., 2019) demonstrate outstanding performance in tasks such as answering questions (Clark et al., 2020; Suissa et al., 2023) and computing the conditional probabilities of masked words in a sentence (Kwon et al., 2022). Nonetheless, recent research indicates that the size of human-annotated data continues to be a significant factor influencing model performance (Gu et al., 2022; Mehrafarin et al., 2022).…”
Section: Introduction (mentioning)
confidence: 99%
“…One of the key AI-based techniques for textual corpora exploration is natural language question answering (QA). Unlike keyword-based search engines, QA algorithms receive and process natural language questions and produce precise answers rather than long lists of documents that need to be manually scanned (Suissa et al., 2021). While factual question-answering models aim to answer a question about one (or a few) piece(s) of information (e.g., about a specific individual), quantitative question-answering models aim to answer quantitative questions about a substantial subset of the dataset (e.g., a community or a country) (Suissa et al., 2023).…”
Section: Introduction (mentioning)
confidence: 99%
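As an illustration of the span-answer behaviour this statement contrasts with keyword search, the following minimal sketch runs an extractive QA model over a short passage via the Hugging Face pipeline API. The checkpoint name and the genealogical toy passage are assumptions, not part of the cited work.

```python
# Minimal sketch: extractive question answering with a pre-trained model.
# The checkpoint and passage are illustrative assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Anna Cohen was born in Vilnius in 1902. She emigrated to New York "
    "in 1921, where she married David Levi in 1925."
)
result = qa(question="Where was Anna Cohen born?", context=context)

# The pipeline returns a precise answer span, not a list of documents.
print(result["answer"], f"(score: {result['score']:.2f})")
```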
“…Indicatively, they managed to predict, with an accuracy of over 92%, the class labels of previously unseen images [47]. In particular, it generates text passages from knowledge sub-graphs that contain genealogical data for creating questions and answers, and for building a question answering system by exploiting deep neural network techniques with the Uncle-BERT model.…”
mentioning
confidence: 99%
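The passage-generation step this statement describes can be pictured with a small hypothetical sketch: verbalizing genealogical (subject, relation, object) triples from a knowledge sub-graph into a plain-text passage that a QA model such as Uncle-BERT could consume. The triple format, relation names, and templates below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: verbalizing a genealogical sub-graph into text.
# Relation names and templates are illustrative assumptions.
TEMPLATES = {
    "child_of": "{s} is a child of {o}.",
    "spouse_of": "{s} is the spouse of {o}.",
    "born_in": "{s} was born in {o}.",
}

def verbalize(subgraph):
    """Turn (subject, relation, object) triples into a plain-text passage."""
    sentences = []
    for s, rel, o in subgraph:
        template = TEMPLATES.get(rel)
        if template:  # skip relations we have no template for
            sentences.append(template.format(s=s, o=o))
    return " ".join(sentences)

subgraph = [
    ("Anna Cohen", "born_in", "Vilnius"),
    ("Anna Cohen", "spouse_of", "David Levi"),
    ("Ruth Levi", "child_of", "Anna Cohen"),
]
print(verbalize(subgraph))
# -> "Anna Cohen was born in Vilnius. Anna Cohen is the spouse of
#     David Levi. Ruth Levi is a child of Anna Cohen."
```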