“…The technical challenges have been overcome thanks to an elegant application of the classical Markov inequality (see, for instance, [8]) and to an enhancement of the methodology used to derive the Geiringer-like results in [9], namely a slightly extended version of the lumping-quotients-of-Markov-chains technique that has been developed, improved, and exploited to estimate stationary distributions of Markov chains modeling EAs in a series of articles: [10], [11], [12], and [13]. Finally, the latter version of the theorem has been further generalized in [14] to allow recombination over arbitrary set covers, rather than being limited to equivalence relations, by using the particular case in [7] in conjunction with the tools mentioned above. The original purpose of the last two finite-population Geiringer-like theorems is to exploit the intrinsic similarities within the state-action set encountered by a learning agent in order to evaluate actions and select an optimal one. The parallel algorithms motivated by these theorems operate on an evolving digraph whose nodes are states and whose edges are actions leading from a current state to the state obtained upon executing that action; these algorithms exhibit similarities to the way Hebbian learning takes place in biological neural networks, a connection further supported by Andrée Ehresmann's category-theoretic model of cognitive processes called a "Memory Evolutive System" (see, for instance, [15], [16], [17], [18], and many related articles).…”
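For reference, the Markov inequality invoked above is the standard tail bound for a nonnegative random variable: for any nonnegative random variable $X$ and any $a > 0$,
$$\Pr(X \geq a) \;\leq\; \frac{\mathbb{E}[X]}{a}.$$
Despite its simplicity, it is the typical starting point for bounding the probability mass that a Markov chain places on undesirable regions of its state space.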
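As a rough illustration of the lumping idea (a minimal sketch, not the construction used in [10]–[13]), the following collapses a Markov transition matrix over a partition of the state space into a quotient chain; the names `P`, `partition`, and `lump` are hypothetical:

```python
import numpy as np

def lump(P, partition):
    """Collapse transition matrix P over a partition of the state space.

    partition: list of lists of state indices (the equivalence classes).
    Returns the quotient transition matrix, assuming the chain is
    lumpable, i.e. all rows within a class assign the same total
    probability to each class.
    """
    k = len(partition)
    Q = np.zeros((k, k))
    for a, block_a in enumerate(partition):
        i = block_a[0]  # representative state; lumpability makes the choice irrelevant
        for b, block_b in enumerate(partition):
            Q[a, b] = P[i, block_b].sum()
    return Q

# Toy 3-state chain in which states 1 and 2 are interchangeable:
P = np.array([[0.2, 0.4, 0.4],
              [0.5, 0.3, 0.2],
              [0.5, 0.2, 0.3]])
Q = lump(P, [[0], [1, 2]])  # quotient chain on the classes {0} and {1, 2}
print(Q)                    # [[0.2, 0.8], [0.5, 0.5]]
```

The payoff of such a quotient is that the stationary distribution of the smaller chain `Q` constrains that of the original chain `P`, which is what makes the technique useful for analyzing Markov chains modeling EAs.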
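The state-action digraph mentioned above might be represented as follows; this is only a hedged sketch of the data structure (the cited articles do not prescribe an implementation), and `StateActionDigraph` and its methods are hypothetical names:

```python
from collections import defaultdict

class StateActionDigraph:
    """Evolving digraph: nodes are states; each directed edge is
    labeled by the action that leads from one state to the next."""

    def __init__(self):
        # state -> {action: next_state}
        self.edges = defaultdict(dict)

    def add_transition(self, state, action, next_state):
        """Record that executing `action` in `state` yields `next_state`."""
        self.edges[state][action] = next_state

    def actions(self, state):
        """Actions available (observed so far) at `state`."""
        return list(self.edges[state].keys())
```

Under this representation, exploiting similarities within the state-action set amounts to evaluating edges jointly across structurally similar parts of the digraph, which is where the analogy with Hebbian strengthening of co-activated connections arises.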