1992
DOI: 10.1007/bf00992863

Abductive explanation-based learning: A solution to the multiple inconsistent explanation problem

Abstract: One problem which frequently surfaces when applying explanation-based learning (EBL) to imperfect theories is the multiple inconsistent explanation problem. The multiple inconsistent explanation problem occurs when a domain theory produces multiple explanations for a training instance, only some of which are correct. Domain theories which suffer from the multiple inconsistent explanation problem can occur in many different contexts, such as when some information is missing and must be assumed: since …
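
The situation the abstract describes can be made concrete with a small sketch. The following Python example is not taken from the paper; the rules, observations, and helper functions are hypothetical and exist only to show a theory that derives two explanations for the same training instance, where only one explanation is consistent with what else is known.

```python
# Minimal illustrative sketch (not from the paper): a toy Horn-clause-style
# theory that yields two explanations for one observation. All names here
# are hypothetical.

# Each conclusion maps to the alternative sets of assumptions that explain it.
RULES = {
    "wet_lawn": [
        {"rained_last_night"},   # explanation 1
        {"sprinkler_ran"},       # explanation 2
    ],
}

# Observations that contradict an assumption.
KNOWN_FALSE = {"rained_last_night"}

def explanations(goal):
    """Return every set of assumptions under which the theory derives `goal`."""
    return RULES.get(goal, [{goal}])  # a leaf goal explains itself

def consistent(assumptions):
    """An explanation is inconsistent if it assumes something known to be false."""
    return not (assumptions & KNOWN_FALSE)

if __name__ == "__main__":
    for expl in explanations("wet_lawn"):
        status = "consistent" if consistent(expl) else "inconsistent"
        print(f"explanation {sorted(expl)}: {status}")
    # The theory produces multiple explanations for the training instance
    # "wet_lawn", but only one survives the consistency check -- the pattern
    # the abstract calls the multiple inconsistent explanation problem.
```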

Cited by 15 publications (7 citation statements). References 26 publications.
“…In this paper, three currently active research programs in machine learning (integrating multiple classifiers (Chan and Stolfo, 1995; Breiman, 1996a, 1996b; Ting, 1996; Drucker, 1997; Ting and Witten, 1997), theory revision (Bergadano and Giordana, 1988; Flann and Dietterich, 1989; Towell et al., 1990; Ourston, 1991; Cohen, 1992; Baffes and Mooney, 1993; Michalski, 1993; Mooney, 1993; Schaffer, 1993; Koppel et al., 1994; Richards and Mooney, 1995) and bias selection (Merz, 1998; Merz, 1995; Ho et al., 1994; Brodley, 1993)) are viewed from a single perspective. The goal of integrating multiple classifiers is to improve the performance and scalability of learning algorithms by generating multiple classifiers, running them on distributed systems, and combining their results.…”
Section: Introduction (mentioning, confidence: 99%)
“…Various methods have been used to learn the optimal opening bid: abductive explanation-based learning [48], artificial neural networks [49], [50], probabilistic neural networks with an evolutionary programming-based clustering technique [51], or rough-fuzzy set theory [52].…”
Section: A. The Bidding Phase (mentioning, confidence: 99%)
“…One of the earliest uses of EBL techniques to guide concept induction in structured domains was through theory specialization [14, 6]. These systems utilize background knowledge represented as an overly-general domain theory.…”
Section: Theory Specialization (mentioning, confidence: 99%)