Delta lenses are an established mathematical framework for modelling and designing bidirectional model transformations (Bx). Following recent observations by Fong et al., the paper extends the delta lens framework with a new ingredient: learning over a parameterized space of model transformations seen as functors. We define the notion of an asymmetric learning delta lens with amendment (ala-lens) and show how ala-lenses can be organized into a symmetric monoidal category. We also show that sequential and parallel composition of well-behaved (wb) ala-lenses is again wb, so that wb ala-lenses constitute a full subcategory of ala-lenses.

Formally, schema S_A is a graph consisting of three arrows named Name, Expr., and Depart., having the common source node OID and the target nodes String, Integer, and String, respectively. This graph freely generates a category (just add four identity arrows), which we again denote by S_A. We assume that a general model of such a schema is a functor X: S_A → Rel that maps arrows to relations. If we need some of these relations to be functions, we label the corresponding arrows in the schema with a special constraint symbol, say [fun], so that the schema becomes a generalized sketch in the sense of Makkai (see [10,11]). In S_A, all three arrows are labelled by [fun], so that a legal model must map them to functions. For example, model A in the figure is given by a functor _^A: S_A → Rel with the following values: OID^A = {#A, #J, #M}; the sets String^A and Integer^A actually do not depend on A, as they are the predefined sets of strings and integers, respectively; and Name^A(#A) = Ann, Name^A(#J) = John, Expr.^A(#A) = 10, etc.
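To make the functorial reading of model A concrete, here is a minimal sketch in Haskell (the paper itself contains no code; the language and the identifiers OID, name, expr, depart are our own encoding). The carrier OID^A becomes a data type, and the three [fun]-labelled arrows become total functions, totality being exactly what the [fun] constraint demands of a legal model. Only the values OID^A = {#A, #J, #M}, Name^A(#A) = Ann, Name^A(#J) = John, and Expr.^A(#A) = 10 are taken from the text; all remaining values are hypothetical placeholders.

```haskell
-- Sketch of model A for schema S_A, under the assumptions stated above.

-- The carrier OID^A = {#A, #J, #M}.
data OID = OidA | OidJ | OidM
  deriving (Eq, Show, Enum, Bounded)

-- Arrow Name: OID -> String. Values for #A and #J are from the text;
-- the value for #M is a hypothetical placeholder.
name :: OID -> String
name OidA = "Ann"
name OidJ = "John"
name OidM = "Mary"        -- placeholder

-- Arrow Expr.: OID -> Integer. Only Expr.^A(#A) = 10 is given in the text.
expr :: OID -> Integer
expr OidA = 10
expr OidJ = 0             -- placeholder
expr OidM = 0             -- placeholder

-- Arrow Depart.: OID -> String. No values are given in the text.
depart :: OID -> String
depart _ = "unknown"      -- placeholder

-- Print the tabular content of model A.
main :: IO ()
main = mapM_ row [minBound .. maxBound]
  where
    row o = putStrLn (show o ++ ": " ++ name o ++ ", "
                      ++ show (expr o) ++ ", " ++ depart o)
```

Encoding the arrows as total functions reflects the [fun] labels; without those labels, a model could send an arrow to an arbitrary relation, which one would instead encode as, e.g., a set-valued function OID -> [String].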