We consider a walker on the line that, at each step, keeps the same direction with a probability depending on the discrete time already spent moving in its current direction. More precisely, the associated left-infinite sequence of jumps is assumed to be a Variable Length Markov Chain (VLMC) built from a probabilized context tree given by a double-infinite comb. These walks with memory of variable length can be seen as generalizations of the Directionally Reinforced Random Walks (DRRW) introduced in [1, Mauldin et al., Adv. Math., 1996], in the sense that the persistence times are anisotropic. We give a complete characterization of recurrence and transience in terms of the probabilities of persisting in the same direction or switching. We point out that the underlying VLMC is not assumed to admit any stationary probability. Actually, the most fruitful situations emerge precisely when no such invariant distribution exists. In that case, recurrence and transience are related to the behaviour of an embedded random walk with an undefined drift, so that the asymptotic behaviour depends only on the asymptotics of the probabilities of changing direction, unlike the other case, in which the criterion reduces to a drift condition. Finally, taking advantage of this flexibility, we give some (possibly random and lacunary) perturbation results and treat the case of more general probabilized context trees built by grafting subtrees onto the double-infinite comb.
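The walk described above can be sketched by a short simulation: at each step the walker persists in its current direction with a probability that depends on the run length (the discrete time already spent in that direction), and the persistence laws may differ between the two directions (anisotropy). This is a minimal illustrative sketch; the functions `p_up` and `p_down` below are arbitrary assumed choices, not the specific probabilities studied in the paper.

```python
import random

def persistent_walk(n_steps, p_up, p_down, seed=None):
    """Simulate a persistent random walk on Z.

    p_up(k) / p_down(k): probability of keeping the current direction
    (+1 / -1) after k consecutive steps in that direction.  These laws
    are illustrative assumptions, not the paper's specific choices.
    """
    rng = random.Random(seed)
    position = 0
    direction = 1 if rng.random() < 0.5 else -1
    run_length = 1  # discrete time already spent in the current direction
    path = [position]
    for _ in range(n_steps):
        position += direction
        path.append(position)
        keep = p_up(run_length) if direction == 1 else p_down(run_length)
        if rng.random() < keep:
            run_length += 1
        else:
            direction = -direction  # switch direction, reset the memory
            run_length = 1
    return path

# Anisotropic persistence: the upward and downward runs follow
# different (purely illustrative) laws.
path = persistent_walk(
    10_000,
    p_up=lambda k: 1 - 1 / (2 * k + 2),
    p_down=lambda k: 1 - 1 / (k + 2),
    seed=42,
)
print(len(path), path[-1])
```

When `p_up` and `p_down` coincide, the model reduces to an isotropic DRRW-type walk; allowing them to differ is what the abstract calls anisotropic persistence times.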