We consider a model of teaching in which the learners are consistent and have bounded state, but are otherwise arbitrary. The teacher is non-interactive and "massively open": the teacher broadcasts a sequence of examples of an arbitrary target concept, intended for every possible on-line learning algorithm to learn from. We focus on the problem of designing interesting teachers: efficient sequences of examples that lead all capable and consistent learners to learn concepts, regardless of the underlying algorithm used by the learner. We use two measures of teaching efficiency: the number of mistakes made by the worst-case learner, and the maximum length of the example sequence needed for the worst-case learner. (An illustrative sketch of this broadcast model appears after the summary below.) Our results are summarized as follows:

• Given a uniform random sequence of examples of an n-bit concept function, learners (capable of consistently learning the concept) with s(n) bits of state are guaranteed to make only O(n · s(n)) mistakes and exactly learn the concept, with high probability. This theorem has interesting corollaries; for instance, every concept c has a sequence of examples that can teach c to all capable consistent on-line learners implementable with s(n)-size circuits, such that every learner makes only Õ(s(n)^2) mistakes. That is, all resource-bounded algorithms capable of consistently learning a concept can be simultaneously taught that concept with few mistakes, on a single example sequence. We also show how to efficiently generate such a sequence of examples on-line: using Nisan's pseudorandom generator, each example in the sequence can be generated with polynomial-time overhead per example, from an O(n · s(n))-bit initial seed (a toy seed-expansion sketch appears after this list).

• To justify our use of randomness, we prove that any non-trivial derandomization of our sequences would imply new circuit lower bounds. For instance, if there is a deterministic 2^(n^O(1))-time algorithm that generates a sequence of examples such that all consistent and capable polynomial-size circuit learners learn the all-zeroes concept with fewer than 2^n mistakes, then EXP ⊄ P/poly (restated symbolically after this list).

• We present examples illustrating that the key differences in our model (our focus on mistakes rather than the total number of examples, and our use of a state bound) must be considered together to obtain our results.

• We show that for every consistent s(n)-state-bounded learner A, and every n-bit concept that A is capable of learning, there is a custom "tutoring" sequence of only O(n · s(n)) examples that teaches A the concept. That is, in principle, there are no slow learners, only bad teachers: if a state-bounded learner is capable of learning a concept at all, then it can always be taught that concept quickly via some short sequence of examples (a brute-force search sketch follows the list).
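To make the model concrete, here is a minimal runnable sketch of the broadcast setting, with the classic halving algorithm standing in for one possible consistent bounded-state learner. The concept class, names, and parameters below are illustrative assumptions of this sketch, not constructions from the paper.

```python
import random

# A toy instance of the broadcast teaching model. The halving learner
# is one example of a consistent, bounded-state learner; the model
# quantifies over all such algorithms.

N = 4                                             # n-bit examples
CONCEPTS = [lambda x, i=i: (x >> i) & 1           # toy class: "the i-th bit of x"
            for i in range(N)]
TARGET = CONCEPTS[2]                              # concept to be taught

class HalvingLearner:
    """Consistent learner whose state is one bit per candidate concept
    (so s(n) = |CONCEPTS| bits): which candidates are still viable."""
    def __init__(self):
        self.alive = set(range(len(CONCEPTS)))

    def predict(self, x):
        votes = sum(CONCEPTS[i](x) for i in self.alive)
        return int(2 * votes >= len(self.alive))  # majority vote of survivors

    def update(self, x, label):
        # Consistency: eliminate every candidate the example contradicts.
        self.alive = {i for i in self.alive if CONCEPTS[i](x) == label}

def broadcast_teach(learner, num_examples=50):
    """Teacher broadcasts uniform random labeled examples of TARGET;
    returns how many prediction mistakes the learner made."""
    mistakes = 0
    for _ in range(num_examples):
        x = random.getrandbits(N)
        if learner.predict(x) != TARGET(x):
            mistakes += 1
        learner.update(x, TARGET(x))              # true label revealed after prediction
    return mistakes

print(broadcast_teach(HalvingLearner()))          # typically 0-2 mistakes
```

Any learner exposing the same predict/update interface can be dropped into broadcast_teach unchanged, which is the sense in which the teacher is "massively open".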
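The on-line generation result replaces the truly random example stream with the output of Nisan's generator for space-bounded computation. Implementing Nisan's construction is beyond a sketch, so the stand-in below only illustrates the interface (a short seed in, a stream of n-bit examples out, polynomial-time work per example) using a hash in counter mode; the hash-based expansion is an assumption of this sketch, not the paper's generator.

```python
import hashlib

def example_stream(seed: bytes, n: int, count: int):
    """Yield `count` pseudorandom n-bit examples derived from `seed`,
    doing polynomial-time work per example."""
    nbytes = (n + 7) // 8
    for i in range(count):
        digest = b""
        block = 0
        while len(digest) < nbytes:               # expand the hash output as needed
            digest += hashlib.sha256(
                seed + i.to_bytes(8, "big") + block.to_bytes(4, "big")
            ).digest()
            block += 1
        # keep exactly n bits of the digest as the i-th example
        yield int.from_bytes(digest[:nbytes], "big") >> (8 * nbytes - n)

# Example: three 8-bit examples from a 16-byte seed (the paper's seed
# length would be O(n * s(n)) bits).
print(list(example_stream(b"\x00" * 16, 8, 3)))
```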
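Stated symbolically, the derandomization consequence above reads as follows (a transcription of the bullet, not a proof):

```latex
\[
  \Bigl(\exists\ \text{deterministic } 2^{n^{O(1)}}\text{-time generator } G
  \text{ such that every consistent, capable poly-size circuit learner makes }
  < 2^{n} \text{ mistakes learning the all-zeroes concept from } G\Bigr)
  \;\Longrightarrow\;
  \mathsf{EXP} \not\subset \mathsf{P/poly}.
\]
```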
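The tutoring result is existential; for intuition, here is a brute-force sketch that, for a concrete learner with an explicit state space, searches for a shortest sequence of correctly labeled examples after which the learner predicts the target everywhere. The transition/predict interface is an assumption of this sketch, and the exhaustive search is feasible only for toy n.

```python
from collections import deque

def tutoring_sequence(transition, predict, init_state, concept, n):
    """BFS over learner states for a shortest sequence of correctly
    labeled examples after which the learner predicts `concept` on
    every n-bit input. States must be hashable. Returns the example
    sequence, or None if no sequence reaches a learned state."""
    inputs = range(2 ** n)

    def learned(state):
        return all(predict(state, x) == concept(x) for x in inputs)

    if learned(init_state):
        return []
    seen = {init_state}
    queue = deque([(init_state, [])])
    while queue:
        state, seq = queue.popleft()
        for x in inputs:                          # try teaching each labeled example
            nxt = transition(state, x, concept(x))
            if nxt in seen:
                continue
            if learned(nxt):
                return seq + [x]
            seen.add(nxt)
            queue.append((nxt, seq + [x]))
    return None                                   # the learner is not capable of `concept`
```

Because an s(n)-state-bounded learner has at most 2^{s(n)} states, this search always terminates; the theorem says much more, namely that for capable learners a sequence of only O(n · s(n)) examples always exists.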