Initially discussed are some of Alan Turing's wonderfully profound and influential ideas about mind and mechanism, including their connection to the main topic of the present study, which lies within the field of computability-theoretic learning theory. Herein is investigated the part of this field concerned with the algorithmic, trial-and-error inference of eventually correct programs for functions from their data points. As to the main content of this study: in prior papers, beginning with the seminal work by Freivalds et al. in 1995, the notion of intrinsic complexity has been used to analyse the learning complexity of sets of functions in a Gold-style learning setting. Herein are pointed out some weaknesses of this notion.
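For concreteness, the basic Gold-style criterion of explanatory learning (Ex) can be sketched as follows; this standard formulation assumes a fixed acceptable programming system $(\varphi_p)_{p\in\mathbb{N}}$ and writes $f[n]$ for the finite data sequence $(f(0),\ldots,f(n-1))$. A learner $M$ Ex-learns a total computable function $f$ iff
\[
  \exists p\,\bigl[\varphi_p = f \;\wedge\; \forall^{\infty} n\,[M(f[n]) = p]\bigr],
\]
that is, iff, fed ever longer initial segments of the data points of $f$, the conjectures of $M$ converge to a single program correctly computing $f$; a set $S$ of functions is Ex-learnable iff some one $M$ Ex-learns every $f \in S$.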
Offered is an alternative based on epitomizing sets of functions: sets that are learnable under a given learning criterion, but not under other criteria that are not at least as powerful.
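Schematically, writing $[I]$ for the collection of sets of functions learnable under criterion $I$, the epitomizing property just described has the shape
\[
  S \text{ epitomizes } I \;\Longleftrightarrow\; S \in [I] \;\wedge\; \forall I'\,\bigl[\,[I] \not\subseteq [I'] \;\Rightarrow\; S \notin [I']\,\bigr];
\]
the exact range of criteria $I'$ quantified over is fixed in the body of the paper, so this display is only an orienting sketch.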
To capture the idea of epitomizing sets, new reducibility notions are given, based on robust learning (closure of learning under certain sets of computable operators). Various degrees of epitomizing sets are characterized as the sets complete with respect to corresponding reducibility notions! These characterizations also provide an easy method for showing sets to be epitomizers, and they are then employed to prove several sets to be epitomizing.
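In general terms, completeness here has its usual shape: given a reducibility $\leq$ on sets of functions and a criterion $I$, a set $S'$ is $\leq$-complete for $I$ iff
\[
  S' \in [I] \;\wedge\; \forall S \in [I]\,[\,S \leq S'\,],
\]
and the characterizations referred to say that the epitomizing sets of a given degree are exactly the complete sets for the corresponding robust-learning-based reducibility. (The internal definition of these reducibilities, via closure of learning under sets of computable operators, is as in the body of the paper; the display above only fixes the completeness template.)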
Furthermore, a scheme is provided to easily generate very strong epitomizers for a multitude of learning criteria. These strong epitomizers are the so-called self-learning sets, previously applied by Case & Kötzing in 2010. They can be easily generated and employed in a myriad of settings to witness with certainty the strict separation in learning power between the criteria so epitomized and other criteria that are not as powerful!
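To give the flavour of such witnessing sets: a simple classical example in the spirit of self-learning is the set of self-describing functions,
\[
  S_0 \;=\; \{\, f \in \mathcal{R} \;:\; \varphi_{f(0)} = f \,\},
\]
where $\mathcal{R}$ is the set of total computable functions; $S_0$ is Ex-learnable by the trivial learner that always conjectures $f(0)$, since each $f \in S_0$ codes a program for itself in its value at zero. The self-learning sets of Case & Kötzing generalize this device, roughly by letting the data itself dictate the learner's conjectures, so that success of the canonical data-driven learner under the epitomized criterion is essentially built into the definition of the set.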