Freivalds defined an acceptable programming system independent criterion for learning programs for functions in which the final programs were required to be both correct and "nearly" minimal size, i.e., within a computable function of being purely minimal size. Kinber showed that this parsimony requirement on final programs limits learning power. However, in scientific inference, parsimony is considered highly desirable. A lim-computable function is (by definition) one calculable by a total procedure allowed to change its mind finitely many times about its output. Investigated is the possibility of assuaging somewhat the limitation on learning power resulting from requiring parsimonious final programs by use of criteria which require the final, correct programs to be "not-so-nearly" minimal size, e.g., to be within a lim-computable function of actual minimal size. It is shown that some parsimony in the final program is thereby retained, yet learning power strictly increases. Considered, then, are lim-computable functions as above, but for which notations for constructive ordinals are used to bound the number of mind changes allowed regarding the output. This is a variant of an idea introduced by Freivalds and Smith. For this ordinal-notation-complexity-bounded version of lim-computability, the resultant learning criteria form finely graded, infinitely ramifying, infinite hierarchies intermediate between the computable and the lim-computable cases. Some of these hierarchies, for the natural notations determining them, are shown to be optimally tight.
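For concreteness, the lim-computability notion above admits a standard equivalent formulation (the Shoenfield limit lemma form; this illustration is ours and is not part of the abstract): a function f is lim-computable iff there is a total computable g such that, for every x,

    f(x) = lim_{t -> infinity} g(x, t),

where, for each x, g(x, t+1) differs from g(x, t) for only finitely many t; each such t counts as one mind change about the output value f(x).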
In any learnability setting, hypotheses are conjectured from some hypothesis space. Studied herein is the influence on learnability of the presence or absence of certain control structures in the hypothesis space. First presented are control structure characterizations of some rather specific but illustrative learnability results. The presence of these control structures is thereby shown to be essential for maintaining full learning power. Then presented are the main theorems. Each of these non-trivially characterizes the conjunction of the invariance of a learning class over a hypothesis space V and the presence in V of a particular projection control structure, called proj, as: V has suitable instances of all denotational control structures. In a sense, then, proj epitomizes the control structures whose presence needn't help and whose absence needn't hinder learning power.
Freivalds defined an acceptable programming system independent criterion for learning programs for functions in which the final programs were required to be both correct and "nearly" minimal size, i.e., within a computable function of being purely minimal size. Kinber showed that this parsimony requirement on final programs severely limits learning power. Nonetheless, in scientific inference, for example, parsimony is considered highly desirable. A lim-computable function is (by definition) one computable by a procedure allowed to change its mind finitely many times about its output. Investigated is the possibility of assuaging somewhat the limitation on learning power resulting from requiring parsimonious final programs by use of criteria which require the final, correct programs to be "not-so-nearly" minimal size, e.g., to be within a lim-computable function of actual minimal size. Interestingly, it is shown that some parsimony in the final program is thereby retained, yet learning power strictly increases. Also considered are lim-computable functions as above, but for which notations for constructive ordinals are used to bound the number of mind changes allowed regarding the output. This is a variant of an idea introduced by Freivalds and Smith. For this ordinal-complexity-bounded version of lim-computability, the resultant learning criteria form strict infinite hierarchies intermediate between the computable and the lim-computable cases. Many open questions are also presented.
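A rough sketch of the ordinal-bounded variant (our illustration; these details are not in the abstract above): the limiting procedure additionally maintains a notation for a constructive ordinal and must strictly decrease that notation, in the associated ordering on notations, each time it changes its mind about an output. Well-foundedness of the ordering then forces the number of mind changes to be finite. For instance, with a notation for omega as the bound, the procedure may, at its first mind change, descend to a notation for some finite n, after which at most n further mind changes remain.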
A generator program for a computable function (by definition) generates an infinite sequence of programs, all but finitely many of which compute that function. Machine learning of generator programs for computable functions is studied. To partially motivate these studies, it is shown that, in some cases, interesting global properties of computable functions can be proved from suitable generator programs which cannot be proved from any ordinary programs for them. The power (for variants of various learning criteria from the literature) of learning generator programs is compared with the power of learning ordinary programs. The learning power in these cases is also compared to that of learning limiting programs, i.e., programs allowed finitely many mind changes about their correct outputs.
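As a toy illustration of the definition (our sketch; the example function and code are ours, not from the paper), the following Python generator emits an infinite sequence of program texts, all but finitely many of which compute n -> n*n:

import itertools

def generator():
    # Finitely many incorrect programs may come first...
    yield "def p(n): return 0"          # wrong on every n except 0
    yield "def p(n): return n + n"      # wrong on most n
    # ...after which every emitted program computes n*n.
    while True:
        yield "def p(n): return n * n"  # correct, emitted forever

# Run the first few emitted programs on a sample input.
for src in itertools.islice(generator(), 4):
    ns = {}
    exec(src, ns)        # materialize the emitted program text
    print(ns["p"](5))    # prints 0, 10, 25, 25

A learner converging to such a generator program never has to commit to a single ordinary program, which is what separates this criterion from ordinary program learning.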