2022
DOI: 10.1038/s41598-021-04590-0

A review of some techniques for inclusion of domain-knowledge into deep neural networks

Abstract: We present a survey of ways in which existing scientific knowledge is included when constructing models with neural networks. The inclusion of domain-knowledge is of special interest not just to the construction of scientific assistants, but also to many other areas that involve understanding data through human-machine collaboration. In many such instances, machine-based model construction may benefit significantly from being provided with human knowledge of the domain encoded in a sufficiently precise form. This paper …

Cited by 95 publications (46 citation statements)
References 75 publications

“…Other proposals set out from deep learning architectures and add "logic" components to them. According to Dash et al [68], this can be done in a variety of ways: (a) by transforming the data; (b) by transforming the loss function, informed by a domain model; and (c) by transforming the model, e.g., by modeling logic operators within the network itself. All these methods appear generic enough to be applicable to movement analytics.…”
Section: Discussion
mentioning
confidence: 99%
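As a concrete illustration of route (c) from the quoted passage, the sketch below models a logical conjunction as a differentiable layer inside a small PyTorch network, so a known domain rule ("the target holds only if features A and B both hold") is built into the architecture itself. The product t-norm and all module names (SoftAnd, DomainInformedNet) are illustrative assumptions, not the construction used in the surveyed paper.

```python
# Minimal PyTorch sketch of route (c): transforming the model by encoding a
# logical relation as a differentiable layer. Names are hypothetical.
import torch
import torch.nn as nn

class SoftAnd(nn.Module):
    """Product t-norm: a differentiable stand-in for logical conjunction."""
    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return a * b  # both inputs are assumed to lie in [0, 1]

class DomainInformedNet(nn.Module):
    def __init__(self, in_dim: int):
        super().__init__()
        self.feat_a = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                    nn.Linear(16, 1), nn.Sigmoid())
        self.feat_b = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                    nn.Linear(16, 1), nn.Sigmoid())
        self.conj = SoftAnd()  # domain rule: target requires A AND B

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conj(self.feat_a(x), self.feat_b(x))

net = DomainInformedNet(in_dim=8)
y = net(torch.randn(4, 8))  # predictions in [0, 1], shaped by the encoded rule
```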
“…Logic can be a helpful mechanism to capture domain knowledge in deep learning architectures. In the categorization of Dash et al [68], Wan and Song [251] fall in category (a): they present an approach that adds a set of auxiliary inputs to help interpret the outcome of a neural network. Input data is passed through a neural network to generate the auxiliary inputs to the next network.…”
Section: Logic and Deep Learning
mentioning
confidence: 99%
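The auxiliary-input idea in the quoted passage can be pictured as two chained networks: the first produces auxiliary signals from the raw input, and the second consumes the input together with those signals. The PyTorch sketch below is a minimal rendering of that flow; the module names and the choice to concatenate input and auxiliary features are illustrative assumptions, not details from Wan and Song.

```python
# Rough sketch of the auxiliary-input pattern: network 1 generates auxiliary
# inputs from the data, network 2 consumes data plus auxiliary inputs.
import torch
import torch.nn as nn

class AuxiliaryGenerator(nn.Module):
    def __init__(self, in_dim: int, aux_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, aux_dim))

    def forward(self, x):
        return self.net(x)  # auxiliary inputs for the downstream network

class DownstreamNet(nn.Module):
    def __init__(self, in_dim: int, aux_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + aux_dim, 32), nn.ReLU(),
                                 nn.Linear(32, out_dim))

    def forward(self, x, aux):
        return self.net(torch.cat([x, aux], dim=-1))

x = torch.randn(4, 8)
aux = AuxiliaryGenerator(8, 4)(x)     # stage 1: generate auxiliary inputs
out = DownstreamNet(8, 4, 2)(x, aux)  # stage 2: predict from data + auxiliaries
```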
“…In this section, we position SPLs against state-of-the-art approaches for enforcing constraints on neural network predictions. In-depth surveys on this topic can be found in [18] and [33].…”
Section: Related Work
mentioning
confidence: 99%
“…Loss-based methods. A prominent strategy consists of penalizing the network for producing inconsistent predictions using an auxiliary loss [18,33]. While popular, loss-based methods cannot, however, guarantee that the predictions will be consistent at test time.…”
Section: Related Work
mentioning
confidence: 99%
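The loss-based strategy described above can be sketched as a task loss augmented with an auxiliary term that penalizes constraint violations. The toy constraint used here ("flooded implies rainy", i.e. p(flooded) ≤ p(rainy)) and the weight lambda_dk are illustrative assumptions, not taken from the cited surveys.

```python
# Minimal sketch of a loss-based method: task loss + penalty for predictions
# that violate a domain constraint. Constraint and names are hypothetical.
import torch
import torch.nn.functional as F

def constrained_loss(logits, targets, lambda_dk: float = 1.0):
    probs = torch.sigmoid(logits)  # column 0 -> rainy, column 1 -> flooded
    task_loss = F.binary_cross_entropy_with_logits(logits, targets)
    # Hinge-style penalty: positive only when p(flooded) exceeds p(rainy).
    violation = torch.relu(probs[:, 1] - probs[:, 0]).mean()
    return task_loss + lambda_dk * violation

logits = torch.randn(4, 2, requires_grad=True)
targets = torch.randint(0, 2, (4, 2)).float()
loss = constrained_loss(logits, targets)
loss.backward()  # gradients also push predictions toward consistency
```

As the quoted passage notes, such a penalty only shapes training; it does not guarantee that predictions satisfy the constraint at test time.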
“…A substantial growth in incorporating knowledge with deep learning has been pursued [3]. Humans are capable of handling information at various levels of abstraction, which correspond to different levels of learning. A knowledge-driven approach in AI should explore stratified knowledge manifested in various human-curated knowledge sources, general-purpose or domain-specific, lexical or graph-based, to find connections between facts and observations, yielding outcomes that humans can relate to their understanding and reason over the model.…”
mentioning
confidence: 99%