Proceedings of the Workshop on Human-in-the-Loop Data Analytics 2019
DOI: 10.1145/3328519.3329132

Knowledge Graph Programming with a Human-in-the-Loop

Cited by 4 publications (2 citation statements). References 17 publications.
“…Work in natural language processing (NLP) has studied how entity resolution can be learned and improved over time through user interactions [20]. A number of tools [21], [22] help people model queries into knowledge graphs [23], [24]; however, there are scale limitations for large, complex, and heterogeneous structures [25]. Other automated approaches feature interactive programming interfaces [26], equi-join-able tables [27], and deep learning approaches to entity resolution [28].…”
Section: Data Integration
Citation type: mentioning (confidence: 99%)
“…It incorporates human feedback at each iteration to improve performance and control semantic drift. Many researchers add humans to natural language processing tasks (such as entity analysis, knowledge graphs, and so on) by using crowdsourcing [54,55,57,59]. Ristoski et al. [66] propose a method of extracting instances from various web resources.…”
Section: Data Preprocessing
Citation type: mentioning (confidence: 99%)