Explanation-based learning (EBL) is a technique by which an intelligent system can learn by observing examples. EBL systems are characterized by the ability to create justified generalizations from single training instances. They are also distinguished by their reliance on background knowledge of the domain under study. Although EBL is usually viewed as a method for performing generalization, it can be viewed in other ways as well. In particular, EBL can be seen as a method that performs four different learning tasks: generalization, chunking, operationalization, and analogy. This paper provides a general introduction to the field of explanation-based learning. Considerable emphasis is placed on showing how EBL combines the four learning tasks mentioned above. The paper begins with a presentation of an intuitive example of the EBL technique. Subsequently EBL is placed in its historical context and the relation between EBL and other areas of machine learning is described. The major part of this paper is a survey of selected EBL programs, which have been chosen to show how EBL manifests each of the four learning tasks. Attempts to formalize the EBL technique are also briefly discussed. The paper concludes with a discussion of the limitations of EBL and the major open questions in the field.
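The generalization task named above can be sketched concretely. The following is a minimal illustration of explanation-based generalization, using the toy "cup" domain theory common in the EBL literature; the rule format, predicate names, and training instance are invented for illustration and are not taken from any particular system surveyed here.

```python
# A minimal sketch of explanation-based generalization (illustrative only).
# Domain theory: each rule concludes a predicate from a conjunction of others.
RULES = {
    "liftable": ["light", "graspable"],
    "stable":   ["flat_bottom"],
    "cup":      ["liftable", "stable", "open_top"],
}

def explain(goal, facts):
    """Return the operational leaves of an explanation of `goal`, or None."""
    if goal in facts:
        return [goal]                 # operational: observed directly
    if goal not in RULES:
        return None
    leaves = []
    for sub in RULES[goal]:
        sub_leaves = explain(sub, facts)
        if sub_leaves is None:
            return None               # the explanation fails
        leaves.extend(sub_leaves)
    return leaves

# A single training instance: facts observed about one specific object.
facts_obj1 = {"light", "graspable", "flat_bottom", "open_top"}

# The explanation justifies the generalization: ANY object with these
# operational features is a cup -- a rule learned from one example.
print(explain("cup", facts_obj1))
```

Because the explanation is built from the domain theory rather than from statistics over many examples, the learned rule is justified by a single instance, which is the distinguishing property of EBL mentioned above.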
Gradient-based numerical optimization of complex engineering designs offers the promise of rapidly producing better designs. However, such methods generally assume that the objective function and constraint functions are continuous, smooth, and defined everywhere. Unfortunately, realistic simulators tend to violate these assumptions, making optimization unreliable. Several decisions that need to be made in setting up an optimization, such as the choice of a starting prototype and the choice of a formulation of the search space, can make a difference in the reliability of the optimization. Machine learning can improve gradient-based methods by making these choices based on the results of previous optimizations. This paper demonstrates this idea by using machine learning for four parts of the optimization setup problem: selecting a starting prototype from a database of prototypes, synthesizing a new starting prototype, predicting which design goals are achievable, and selecting a formulation of the search space. We use standard tree-induction algorithms (C4.5 and CART). We present results in two realistic engineering domains: racing yachts and supersonic aircraft. Our experimental results show that using inductive learning to make setup decisions improves both the speed and the reliability of design optimization.
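The core idea, learning a setup decision from logs of previous optimizations, can be sketched with a from-scratch tree-induction step standing in for C4.5/CART. The feature names, thresholds, and training rows below are invented; real training data would come from recorded optimization runs.

```python
# Sketch: induce one decision-tree split for a setup decision (which
# starting prototype to use), ID3-style, by information gain. This is a
# simplified stand-in for C4.5/CART; the data here is illustrative.
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def best_split(rows, labels):
    """Pick the (feature, threshold) pair with the highest information gain."""
    base, best = entropy(labels), (None, None, 0.0)
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [l for r, l in zip(rows, labels) if r[f] <= t]
            right = [l for r, l in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            gain = base - (len(left) * entropy(left)
                           + len(right) * entropy(right)) / len(labels)
            if gain > best[2]:
                best = (f, t, gain)
    return best

# Logs of previous optimizations: goal features -> best starting prototype.
rows = [[30.0, 5.0], [32.0, 4.0], [20.0, 9.0], [21.0, 8.5]]
labels = ["proto_A", "proto_A", "proto_B", "proto_B"]

print(best_split(rows, labels))   # the split that separates the prototypes
```

Recursing on each side of the chosen split yields a full tree; C4.5 and CART differ mainly in their split criteria and pruning, not in this basic loop.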
The first step for most case-based design systems is to select an initial prototype from a database of previous designs. The retrieved prototype is then modified to tailor it to the given goals. For any particular design goal, the selection of a starting point for the design process can have a dramatic effect both on the quality of the eventual design and on the overall design time. We present a technique for automatically constructing effective prototype-selection rules. Our technique applies a standard inductive-learning algorithm, C4.5, to a set of training data describing which particular prototype would have been the best choice for each goal encountered in a previous design session. We have tested our technique in the domain of racing-yacht-hull design, comparing our inductively learned selection rules to several competing prototype-selection methods. Our results show that the inductive prototype-selection method leads to better final designs when the design process is guided by a noisy evaluation function, and that the inductively learned rules are often more efficient than competing methods.

This research has benefited from numerous discussions with members of the Rutgers CAP project. We thank Andrew Gelsey for helping with the cross-validation code, John Keane for helping with RUVPP, and Andrew Gelsey and Tim Weinrich for comments on a previous draft of this paper. This research was supported under ARPA-funded NASA grant NAG 2-645.

1: Introduction

Many automated design systems begin by retrieving an initial prototype from a library of previous designs, using the given design goal as an index to guide the retrieval process [14]. The retrieved prototype is then modified by a set of design-modification operators to tailor the selected design to the given goals. In many cases the quality of competing designs can be assessed using domain-specific evaluation functions, and in such cases the design-modification process is often accomplished by an optimization method such as hill-climbing search [12, 2]. Such a design system can be seen as a case-based reasoning system [4], in which the prototype-selection method is the indexing process and the optimization method is the adaptation process.

In the context of such case-based design systems, the choice of an initial prototype can affect both the quality of the final design and the computational cost of obtaining that design, for three reasons. First, prototype selection may affect quality when the prototypes lie in disjoint search spaces. In particular, if the system's design-modification operators cannot convert any prototype into any other prototype, the choice of initial prototype restricts the set of designs reachable by any search process; a poor choice may therefore lead to a suboptimal final design. Second, prototype selection may affect quality when the design process is guided by a nonlinear evaluation function with unknown global properties. Since there is no known method that is guaranteed to ...
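The adaptation step described above can be sketched as hill-climbing from a selected prototype under an evaluation function. The quadratic evaluation function, step size, and prototype coordinates below are invented stand-ins for a domain-specific simulator such as a yacht velocity-prediction program.

```python
# Sketch: hill-climbing adaptation of a selected prototype (illustrative).
def evaluate(design):
    # Hypothetical "lower is better" score with its optimum at (3.0, -1.0).
    x, y = design
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

def hill_climb(prototype, step=0.5, max_iters=1000):
    current, score = prototype, evaluate(prototype)
    for _ in range(max_iters):
        neighbors = [(current[0] + dx, current[1] + dy)
                     for dx, dy in [(step, 0), (-step, 0), (0, step), (0, -step)]]
        best = min(neighbors, key=evaluate)
        if evaluate(best) >= score:
            return current            # local optimum: no neighbor improves
        current, score = best, evaluate(best)
    return current

# A prototype retrieved from the design library, then adapted by search.
print(hill_climb((0.0, 0.0)))
```

With a multimodal or noisy evaluation function, different starting prototypes send this search into different local optima, which is precisely why the prototype-selection decision matters.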
Numerical design optimization algorithms are highly sensitive to the particular formulation of the optimization problems they are given. The formulation of the search space, the objective function, and the constraints will generally have a large impact on the duration of the optimization process as well as the quality of the resulting design. Furthermore, the best formulation will vary from one application domain to another, and from one problem to another within a given application domain. Unfortunately, a design engineer may not know the best formulation in advance of attempting to set up and run a design optimization process. To attack this problem, we have developed a software environment that supports interactive formulation, testing, and reformulation of design optimization strategies. Our system represents optimization strategies as second-order dataflow graphs. Reformulations of strategies are implemented as transformations between dataflow graphs. The system permits the user to interactively generate and search a space of design optimization strategies, and experimentally evaluate their performance on test problems, in order to find a strategy that is suitable for his application domain. The system has been implemented in a domain-independent fashion, and is being tested in the domain of racing yacht design.
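The graph-and-transformation idea can be sketched simply: a strategy as a small dataflow graph, and a reformulation as a function from graphs to graphs. The node names, graph encoding, and the particular transformation below are invented for illustration; the real system uses second-order dataflow graphs with a richer transformation language.

```python
# Sketch: an optimization strategy as a dataflow graph, and a reformulation
# as a graph-to-graph transformation (all names here are illustrative).
import copy

# A strategy: nodes computing values, with edges naming their inputs.
strategy = {
    "search_space": {"op": "parameterize", "inputs": [], "args": {"dims": 8}},
    "optimizer":    {"op": "hill_climb",   "inputs": ["search_space"]},
    "evaluation":   {"op": "simulate",     "inputs": ["optimizer"]},
}

def reformulate_reduce_dims(graph, new_dims):
    """Transformation: shrink the search-space formulation, keep the rest."""
    new_graph = copy.deepcopy(graph)
    new_graph["search_space"]["args"]["dims"] = new_dims
    return new_graph

candidate = reformulate_reduce_dims(strategy, 4)
print(candidate["search_space"]["args"]["dims"])   # reformulated copy
print(strategy["search_space"]["args"]["dims"])    # original left intact
```

Because each transformation produces a new graph rather than mutating the old one, a user (or a search procedure) can generate a space of candidate strategies and evaluate each on test problems, as the abstract describes.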