The agnostic PAC learning model consists of: a Hypothesis Space H, a probability distribution P, a sample complexity function m_H(ε, δ) : [0, 1]² → ℤ₊ of precision ε and confidence 1 − δ, a finite i.i.d. sample D_N, a cost function ℓ, and a learning algorithm A(H, D_N), which estimates a hypothesis ĥ ∈ H that approximates a target function h ∈ H, seeking to minimize the out-of-sample error. In this model, prior information is represented by H and ℓ, while the problem is solved by instantiating them in applied learning models, each with a specific algebraic structure for H and a corresponding learning algorithm. However, these applied models rely on important concepts not covered by classic PAC learning theory: model selection and regularization. This paper presents an extension of the agnostic PAC model that covers these concepts. The main principle added is the selection, based solely on data, of a subspace of H with a VC dimension compatible with the available sample. In order to formalize this principle, the concept of Learning Space L(H), a poset of subsets of H that covers H and satisfies a property regarding the VC dimension of related subspaces, is presented as the natural search space for model selection algorithms. A remarkable result obtained on this new framework are
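
For orientation, the sample complexity function m_H(ε, δ) above is conventionally read as the smallest sample size beyond which the agnostic PAC guarantee holds uniformly over distributions. The LaTeX display below states that textbook reading as an illustration, not as a definition quoted from this paper; here L_P denotes the out-of-sample error under P:

\[
  N \ge m_{\mathcal{H}}(\varepsilon, \delta)
  \;\Longrightarrow\;
  \mathbb{P}\Big( L_P(\hat{h}) - \inf_{h \in \mathcal{H}} L_P(h) \le \varepsilon \Big) \ge 1 - \delta,
\]

for every distribution P and every i.i.d. sample D_N of size N ≥ m_H(ε, δ).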
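
To make the Learning Space requirements concrete, one plausible formalization of its two defining properties, the cover condition and the VC-dimension condition on related (inclusion-comparable) subspaces, is sketched below in LaTeX; the strict inequality is an assumption of this sketch, not a definition quoted from the paper:

\[
  \bigcup_{\mathcal{M} \in \mathcal{L}(\mathcal{H})} \mathcal{M} = \mathcal{H},
  \qquad
  \mathcal{M}_1 \subsetneq \mathcal{M}_2 \;\Longrightarrow\; d_{VC}(\mathcal{M}_1) < d_{VC}(\mathcal{M}_2)
  \quad \text{for } \mathcal{M}_1, \mathcal{M}_2 \in \mathcal{L}(\mathcal{H}).
\]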
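
As a toy illustration of the main principle, choosing from data alone a subspace whose VC dimension the sample can support, the Python sketch below runs SRM-style penalized empirical risk minimization over a small chain of hypothesis classes. Everything in it, from the penalty constants to the helper names (vc_penalty, select_subspace), is an assumption for illustration; it is not the algorithm proposed in the paper.

# Illustrative sketch, not the paper's algorithm: SRM-style selection of a
# hypothesis subspace from a toy "learning space" (a chain of classes ordered
# by inclusion, each tagged with its VC dimension), via penalized empirical risk.
import math
import random

def vc_penalty(vc_dim, n, delta=0.05):
    # Classical VC-type deviation term; the constant form is an assumption.
    return math.sqrt((vc_dim * math.log(2 * n) + math.log(1 / delta)) / n)

def empirical_risk(h, data):
    # Fraction of sample points that hypothesis h misclassifies.
    return sum(h(x) != y for x, y in data) / len(data)

def select_subspace(learning_space, data):
    # Return (penalized risk, ERM hypothesis, VC dim) of the best subspace.
    n = len(data)
    best = None
    for hypotheses, vc_dim in learning_space:
        erm = min(hypotheses, key=lambda h: empirical_risk(h, data))
        score = empirical_risk(erm, data) + vc_penalty(vc_dim, n)
        if best is None or score < best[0]:
            best = (score, erm, vc_dim)
    return best

def threshold(t):
    return lambda x: int(x > t)

def interval(a, b):
    return lambda x: int(a < x <= b)

random.seed(0)
data = [(x, int(x > 0.6)) for x in (random.random() for _ in range(200))]
grid = [t / 20 for t in range(1, 20)]
space = [
    ([threshold(t) for t in grid], 1),                           # VC dim 1
    ([interval(a, b) for a in grid for b in grid if a < b], 2),  # VC dim 2
]
score, h_hat, d = select_subspace(space, data)
print(f"selected VC dimension {d} with penalized risk {score:.3f}")

The penalty makes richer subspaces pay for their capacity, so the sample size, through the penalty's 1/√n decay, determines how large a VC dimension the selection can afford; with only 200 points the sketch settles on the smaller threshold class.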