Abstract. Natural disasters significantly affect the environment and economies of countries around the world, and a large volume of multi-source, heterogeneous geographic information data is generated every day. However, owing to a lack of knowledge-transformation capabilities, these countries continue to struggle with the problem of "abundant data but little knowledge". Extracting disaster-related geographic knowledge from these vast data and constructing a geographic knowledge graph that integrates disaster information is therefore of great significance. Building on knowledge-extraction theory, this paper proposes a method for constructing a natural disaster knowledge graph that integrates geographic information. The core of this knowledge graph is the association between natural disaster concepts, research areas, and spatial data. The vocabulary and relationships associated with disaster concepts are derived primarily from an existing vocabulary of geographic narratives, which provides rich semantic relationships among domain concepts for the entire knowledge graph. The research areas and spatial data types are obtained mainly through knowledge entity extraction and disambiguation. This disaster knowledge graph can effectively support applications such as natural disaster visualization and analysis, data recommendation systems, and intelligent Q&A systems, which can further improve the intelligence of natural disaster knowledge services and is expected, to a certain extent, to promote the sharing and reuse of domain knowledge graphs.
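For illustration only, the sketch below shows one way the associations described in this abstract (disaster concept, research area, and spatial data type) could be encoded as RDF triples with rdflib. The namespace, class names, and properties are hypothetical assumptions, not the paper's actual schema.

```python
# Minimal, hypothetical sketch of the triple structure described above:
# a disaster concept linked to a research area and a spatial data type.
# All namespace, class, and property names are illustrative assumptions.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/disaster-kg/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Disaster concept (e.g. taken from a geographic-narrative vocabulary)
g.add((EX.Flood, RDF.type, EX.DisasterConcept))
g.add((EX.Flood, RDFS.label, Literal("flood", lang="en")))

# Research area identified via entity extraction and disambiguation
g.add((EX.YangtzeRiverBasin, RDF.type, EX.ResearchArea))
g.add((EX.Flood, EX.occursIn, EX.YangtzeRiverBasin))

# Spatial data type associated with the concept and the area
g.add((EX.PrecipitationRaster, RDF.type, EX.SpatialDataType))
g.add((EX.Flood, EX.hasRelevantData, EX.PrecipitationRaster))

print(g.serialize(format="turtle"))
```

A downstream data recommendation or Q&A system could then answer queries such as "which spatial data are relevant to floods in this research area" by traversing these associations.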
Knowledge graphs are important in human-centered AI because of their ability to reduce the need for large labelled machine-learning datasets, facilitate transfer learning, and generate explanations. However, knowledge-graph construction has evolved into a complex, semi-automatic process that increasingly relies on opaque deep-learning models and vast collections of heterogeneous data sources to scale. The knowledge-graph lifecycle is not transparent, accountability is limited, and there are no accounts of, or indeed methods to determine, how fair a knowledge graph is in the downstream applications that use it. Knowledge graphs are thus at odds with AI regulation, for instance the EU’s upcoming AI Act, and with ongoing efforts elsewhere in AI to audit and debias data and algorithms. This paper reports on work in progress towards designing explainable (XAI) knowledge-graph construction pipelines with human-in-the-loop and discusses research topics in this space. These were grounded in a systematic literature review, in which we studied tasks in knowledge-graph construction that are often automated, as well as common methods to explain how they work and their outcomes. We identified three directions for future research: (i) tasks in knowledge-graph construction where manual input remains essential and where there may be opportunities for AI assistance; (ii) integrating XAI methods into established knowledge-engineering practices to improve stakeholder experience; as well as (iii) evaluating how effective explanations genuinely are in making knowledge-graph construction more trustworthy.
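As a purely illustrative sketch (not the authors' pipeline), the snippet below shows how a human-in-the-loop checkpoint could sit inside an otherwise automated knowledge-graph construction step: triples extracted by an opaque model are auto-accepted only above a confidence threshold, and flagged ones are queued for review together with a short explanation of why they were flagged. All names, fields, and thresholds are assumptions.

```python
# Hypothetical human-in-the-loop checkpoint for KG construction.
# High-confidence extractions are accepted automatically; low-confidence
# ones are routed to a reviewer with a human-readable explanation.
from dataclasses import dataclass

@dataclass
class CandidateTriple:
    subject: str
    predicate: str
    obj: str
    confidence: float   # score reported by the extraction model
    provenance: str     # source document or passage the triple came from

def triage(triples, threshold=0.9):
    accepted, needs_review = [], []
    for t in triples:
        if t.confidence >= threshold:
            accepted.append(t)
        else:
            explanation = (
                f"Flagged: confidence {t.confidence:.2f} < {threshold}; "
                f"extracted from {t.provenance}."
            )
            needs_review.append((t, explanation))
    return accepted, needs_review

# Example usage with made-up extractions.
candidates = [
    CandidateTriple("Flood", "occursIn", "Yangtze River Basin", 0.95, "doc-12"),
    CandidateTriple("Flood", "causedBy", "Earthquake", 0.41, "doc-07"),
]
accepted, needs_review = triage(candidates)
for triple, why in needs_review:
    print(f"Review needed: {triple.subject} -{triple.predicate}-> {triple.obj}. {why}")
```

The point of such a checkpoint is transparency and accountability: every triple that enters the graph carries either an automatic-acceptance record or a reviewer decision plus the explanation shown to that reviewer.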