Iterative learning is a model of language learning from positive data, due to Wiehagen. When compared to a learner in Gold's original model of language learning from positive data, an iterative learner can be thought of as memory-limited. However, an iterative learner can memorize some input elements by coding them into the syntax of its hypotheses. A main concern of this paper is: to what extent are such coding tricks necessary?

One means of preventing some such coding tricks is to require that the hypothesis space used be free of redundancy, i.e., that it be 1-1. In this context, we make the following contributions. By extending a result of Lange & Zeugmann, we show that many interesting and non-trivial classes of languages can be iteratively identified using a Friedberg numbering as the hypothesis space. (Recall that a Friedberg numbering is a 1-1 effective numbering of all computably enumerable sets.) An example of such a class is the class of pattern languages over an arbitrary alphabet. On the other hand, we show that there exists an iteratively identifiable class of languages that cannot be iteratively identified using any 1-1 effective numbering as the hypothesis space.

We also consider an iterative-like learning model in which the computational component of the learner is modeled as an enumeration operator, as opposed to a partial computable function. In this new model, there are no hypotheses and, thus, no syntax in which the learner can encode what elements it has or has not yet seen. We show that there exists a class of languages that can be identified under this new model, but that cannot be iteratively identified. On the other hand, we show that there exists a class of languages that cannot be identified under this new model, but that can be iteratively identified using a Friedberg numbering as the hypothesis space.
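For concreteness, the restriction that distinguishes an iterative learner from a Gold-style learner can be sketched as follows (the notation $M$, $h_n$, $x_{n+1}$ is illustrative and not taken from the paper): on a text $x_1, x_2, x_3, \ldots$ for the target language, an iterative learner $M$ produces its conjectures by
\[
  h_{n+1} \;=\; M(h_n,\, x_{n+1}),
\]
so that $M$ sees only its previous hypothesis and the current datum, and any memory of earlier data must be carried in the syntax of $h_n$. A Gold-style learner, by contrast, may compute $h_{n+1}$ from the entire initial segment $x_1, \ldots, x_{n+1}$.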