2018
DOI: 10.3389/frobt.2018.00056

Human-Guided Learning for Probabilistic Logic Models

Abstract: Advice-giving has long been explored in the artificial intelligence community as a way to build robust learning algorithms when the data are noisy, incorrect, or even insufficient. While logic-based systems were used effectively in building expert systems, the role of the human has recently been restricted to that of a "mere labeler." We hypothesize and demonstrate that probabilistic logic can provide an effective and natural way for the expert to specify domain advice. Specifically, we consider different types of a…

Cited by 8 publications (14 citation statements)
References 41 publications
“…There are a number of interesting avenues for future work. Other interactive learning approaches such as coactive (Shivaswamy et al. 2015), active imitation (Judah et al. 2012), mixed-initiative interactive (Cakmak et al. 2011) and guided probabilistic learning (Odom and Natarajan 2018) should be made explanatory. Making deep active learning (Gal et al. 2017) explanatory is likely to improve upon the sample complexity of deep learning.…”
Section: Results (mentioning)
confidence: 99%
“…Since the KiGB loss function is defined with respect to the qualitative constraint and is not tied to the gradient, it is easy to use this approach with any tree-based learning method, such as decision trees, random forests, AdaBoost, relational regression trees, etc. As shown by Odom et al. (2018), there is a close connection between qualitative constraints and preferences. In specific cases, preferences can be reduced to qualitative constraints and the KiGB framework can be leveraged.…”
Section: Extensions (mentioning)
confidence: 89%
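
To make the idea in the excerpt above concrete, here is a minimal sketch of folding an expert's qualitative (monotonic-influence) constraint into a tree-based fit as a soft penalty on the empirical loss. This is not the KiGB implementation (which works on the trees themselves during learning); the probing scheme, the hinge penalty, and the names monotonicity_penalty, constrained_score, lam, and delta are assumptions for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def monotonicity_penalty(model, X, feature, delta=1.0, increasing=True):
    """Soft penalty for violating the expert's qualitative constraint that
    predictions should not decrease (or not increase) as `feature` grows.
    Probes the fitted model at perturbed inputs and hinge-penalises violations."""
    X_pert = X.copy()
    X_pert[:, feature] += delta
    diff = model.predict(X_pert) - model.predict(X)
    violations = np.maximum(0.0, -diff) if increasing else np.maximum(0.0, diff)
    return float(violations.mean())

def constrained_score(model, X, y, feature, lam=1.0, increasing=True):
    """Empirical squared loss plus the qualitative-constraint penalty."""
    mse = float(np.mean((model.predict(X) - y) ** 2))
    return mse + lam * monotonicity_penalty(model, X, feature, increasing=increasing)

# Toy usage: among a few candidate trees, keep the one whose fit best respects
# the advice "the target is monotonically non-decreasing in feature 0".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)
best = min(
    (DecisionTreeRegressor(max_depth=d, random_state=0).fit(X, y) for d in (2, 4, 8)),
    key=lambda m: constrained_score(m, X, y, feature=0),
)
print("selected depth:", best.get_depth())
```

Because the penalty only queries model predictions, the same scoring applies unchanged to random forests, boosted ensembles, or relational regression trees, which is the flexibility the excerpt points to.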
“…In specific cases, preferences can be reduced to qualitative constraints and the KiGB framework can be leveraged. Consider the example advice shown in (Odom and Natarajan 2018): if any car passes the agent on the right, then the agent should move into the right lane. This advice is represented as a preference rule (r = ⟨F, l+, l−⟩) with preferred label l+ = move right and avoided label l− = stay.…”
[Footnote 3: https://starling.utdallas.edu/assets/pdfs/KokelAAAI20Sup.pdf]
Section: Extensions (mentioning)
confidence: 99%
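
As a rough sketch of the reduction the excerpt describes, the driving advice can be written as a preference rule ⟨F, l+, l−⟩ and, when the condition F behaves like a boolean feature, read off as monotonic-influence constraints on the preferred and avoided labels. The dataclass names, the indicator feature passed_on_right, and the reduction function below are hypothetical, not code from either paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PreferenceRule:
    condition: str        # F, e.g. "passes(Car, agent, right)"
    preferred: List[str]  # l+ : labels the advice pushes towards
    avoided: List[str]    # l- : labels the advice pushes away from

@dataclass
class QualitativeConstraint:
    feature: str          # boolean indicator derived from F
    label: str            # label score that should move with the feature
    direction: str        # "non-decreasing" or "non-increasing"

def preference_to_constraints(rule: PreferenceRule, feature: str):
    """Special-case reduction: when F can be treated as a boolean feature,
    preferring l+ / avoiding l- becomes a pair of monotonic-influence constraints."""
    constraints = [QualitativeConstraint(feature, l, "non-decreasing") for l in rule.preferred]
    constraints += [QualitativeConstraint(feature, l, "non-increasing") for l in rule.avoided]
    return constraints

# The driving advice from the excerpt above, written as a preference rule.
advice = PreferenceRule(
    condition="passes(Car, agent, right)",
    preferred=["move_right"],
    avoided=["stay"],
)
for c in preference_to_constraints(advice, feature="passed_on_right"):
    print(c)
```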