Loops are a fundamental control structure in programming languages. Being able to analyze, transform, and optimize loops is a key capability for compilers, as it lets them handle repetitive computation schemes with a complexity proportional to the size of the program rather than to the number of operations it describes. This is true for the generation of optimized software as well as for the generation of hardware, for both sequential and parallel execution. The goal of this talk is to recall one of the most important theories for understanding loops, the decomposition of Karp, Miller, and Winograd (1967) for systems of uniform recurrence equations, and its connections with two different lines of work on loops: the theory of transformation and parallelization of (nested) DO loops, and the theory of ranking functions for proving the termination of (imperative) programs with WHILE loops. Other connections, which will not be covered, include reachability problems in vector addition systems and Petri nets.
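As a toy illustration of the ranking-function idea mentioned above (a sketch for this abstract, not an example from the talk itself): to prove that a WHILE loop terminates, one exhibits a function of the program state that is bounded below whenever the loop guard holds and strictly decreases at every iteration. The loop body and the function `r` below are hypothetical choices for illustration.

```python
# Toy ranking-function argument for the loop: while x > 0: x = x - 2
# Candidate ranking function: r(x) = x.
# Termination follows because r is bounded below (r(x) > 0 under the
# guard x > 0) and strictly decreases at each iteration (x - 2 < x).

def run_with_trace(x):
    """Execute the loop, recording r(x) = x on entry to each iteration."""
    trace = []
    while x > 0:
        trace.append(x)   # guard holds here, so r(x) = x > 0
        x = x - 2         # each transition strictly decreases r
    return trace

trace = run_with_trace(7)
# r strictly decreases along the execution...
assert all(a > b for a, b in zip(trace, trace[1:]))
# ...and stays above the lower bound while the loop runs.
assert all(v > 0 for v in trace)
```

Since a strictly decreasing sequence of integers bounded below must be finite, the loop cannot run forever; the same schema, with linear ranking functions over several variables, underlies automated termination provers.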