Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/331

Knowledge-Based Regularization in Generative Modeling

Abstract: Prior domain knowledge can greatly help to learn generative models. However, it is often too costly to hard-code prior knowledge as a specific model architecture, so we often have to use general-purpose models. In this paper, we propose a method to incorporate prior knowledge of feature relations into the learning of general-purpose generative models. To this end, we formulate a regularizer that makes the marginals of a generative model follow prescribed relative dependence of features. It can be in…
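The truncated abstract still conveys the core mechanism: a regularization term that steers a generative model's marginals toward a prescribed relative ordering of feature dependences. Below is a minimal PyTorch sketch of that pattern, not the paper's actual formulation: using sample correlation as the dependence measure, the hinge form, the margin, and all names are illustrative assumptions.

```python
import torch

def relative_dependence_penalty(x, stronger, weaker, margin=0.1):
    """Hinge penalty encouraging the `stronger` feature pair to be more
    dependent (here: larger |correlation|) than the `weaker` pair.

    x        : (batch, n_features) samples drawn from the generative model
    stronger : index pair (i, j) prescribed to be more dependent
    weaker   : index pair (k, l) prescribed to be less dependent
    """
    xc = x - x.mean(dim=0, keepdim=True)           # center each feature
    cov = xc.T @ xc / (x.shape[0] - 1)             # sample covariance
    std = cov.diagonal().clamp_min(1e-8).sqrt()
    corr = cov / (std[:, None] * std[None, :])     # sample correlation
    (i, j), (k, l) = stronger, weaker
    # Positive only when the prescribed ordering is violated by `margin`.
    return torch.relu(corr[k, l].abs() - corr[i, j].abs() + margin)

# Hypothetical usage: add the penalty to the model's ordinary training loss.
# samples = generator(z)
# loss = base_loss + lam * relative_dependence_penalty(samples, (0, 1), (0, 2))
```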

Cited by 6 publications (8 citation statements) · References 5 publications

“…Several studies use prior knowledge on the input feature space for regularizing models. For example, they use the similarity between features (Krupka and Tishby 2007; Mollaysa, Strasser, and Kalousis 2017; Takeishi and Kawahara 2021) and relevant features for prediction per labeled training instance (Zaidan, Eisner, and Piatko 2007; Rieger et al. 2020; Du et al. 2019). These studies have shown the effectiveness of using prior knowledge on the input feature space.…”
Section: Related Work (mentioning)
confidence: 99%
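The works grouped in this statement regularize models using prior similarity between input features. One common instantiation of that idea, not necessarily the cited papers' exact formulation, is a graph-Laplacian-style penalty that pulls together the weight rows of features declared similar; `W`, `S`, and the function name below are hypothetical:

```python
import torch

def feature_similarity_penalty(W, S):
    """Graph-Laplacian-style regularizer: if features i and j are declared
    similar a priori (large S[i, j]), pull their weight rows together.

    W : (n_features, d) weights, one row per input feature
    S : (n_features, n_features) nonnegative prior similarity matrix
    """
    diff = W[:, None, :] - W[None, :, :]              # all pairwise differences
    return 0.5 * (S * diff.pow(2).sum(dim=-1)).sum()  # sum_ij S_ij ||w_i - w_j||^2
```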
“…They highlight the difficulty in eliciting expert knowledge from people, but their technique is similar to the other works presented here in that the knowledge loss is still represented as an additive term to the standard network loss. Takeishi and Kawahara (2020) present an example of how knowledge of the relations of objects can be used to regularise a generative model. Again, the solution involves appending terms to the loss function, but they demonstrate that relational information can aid a learning algorithm.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, these approaches do not always guarantee the correct identification of the mechanistic part, and the outcomes depend on the specific regularization term used [28]. To the best of our knowledge, the identifiability of the mechanistic parameters in a HNODE model has not been investigated in the literature so far.…”
Section: Introduction (mentioning)
confidence: 99%
“…Firstly, while model calibration in mechanistic models usually relies on global optimization techniques to explore the parameter search space [11], training HNODE models necessitates the use of local and gradient-based methods [25]. Secondly, incorporating a universal approximator, such as a neural network, into a dynamical model may compromise the identifiability of the HNODE mechanistic components [28, 29]. In this sense, the existing literature has focused on trying to enforce the identifiability of the mechanistic parameters within a HNODE by integrating a regularization term into the cost function [30].…”
Section: Introduction (mentioning)
confidence: 99%
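These last two statements describe hybrid neural ODE (HNODE) models, where a regularization term is added to the training cost to keep the neural component from absorbing dynamics that the mechanistic parameters should explain. Below is a self-contained toy sketch of that pattern under stated assumptions: a scalar first-order-decay mechanistic term, an explicit-Euler rollout, and an L2 penalty on the neural correction. None of this is taken from the cited papers, whose models and regularizers differ.

```python
import torch

class ToyHNODE(torch.nn.Module):
    """dy/dt = -theta * y + net(y): a mechanistic decay term with an
    interpretable parameter theta, plus a neural correction."""
    def __init__(self):
        super().__init__()
        self.theta = torch.nn.Parameter(torch.tensor(0.5))
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

    def forward(self, y):
        return -self.theta * y + self.net(y)

def hnode_loss(model, y_obs, dt, lam=1e-2):
    """MSE of an explicit-Euler rollout plus an L2 penalty that shrinks the
    neural correction, nudging explainable dynamics into theta.

    y_obs : (T, 1) tensor of observations at uniform time spacing dt
    """
    y, preds, pen = y_obs[:1], [], 0.0
    for _ in range(len(y_obs) - 1):
        pen = pen + model.net(y).pow(2).mean()   # penalize neural contribution
        y = y + dt * model(y)                    # one explicit-Euler step
        preds.append(y)
    mse = (torch.cat(preds) - y_obs[1:]).pow(2).mean()
    return mse + lam * pen / (len(y_obs) - 1)
```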