We introduce an alternative to the notion of 'fast rate' in learning theory, one that coincides with the optimal error rate when the given class happens to be convex and regular in an appropriate sense. While it is well known that such a rate cannot always be attained by a learning procedure (i.e., a procedure that selects a function in the given class), we introduce an aggregation procedure that attains the rate under rather minimal assumptions: for example, that the $L_q$ and $L_2$ norms are equivalent on the linear span of the class for some $q > 2$, and that the target random variable is square-integrable.
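To make the assumptions concrete, one way to formalize the norm equivalence (the class $F$, the constant $C$, and the target $Y$ are illustrative names, not notation fixed by the abstract) is: there exist $q > 2$ and $C \geq 1$ such that
$$\|f\|_{L_q} \le C\,\|f\|_{L_2} \quad \text{for every } f \in \mathrm{span}(F),$$
while square-integrability of the target amounts to $\mathbb{E}\,Y^2 < \infty$.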