Concurrency, the art of doing many things at the same time, is slowly becoming a science. It is very difficult to master, yet it arises all over modern computing systems, both when the communication medium is shared memory and when it is message passing. Concurrent programming is hard because it requires coping with the many possible, unpredictable behaviors of communicating processes interacting with each other. Right from its start in the 1960s, the main way of dealing with concurrency has been by reduction to sequential reasoning. We trace this history and illustrate it through several examples, from early ideas based on mutual exclusion, through consensus and concurrent objects, to today's ledgers and blockchains. We conclude with a discussion of the limits this approach encounters, related to fault tolerance, performance, and inherently concurrent problems.