Concurrency in software engineering refers to the collection of techniques and mechanisms that enable a computer program to perform several different tasks simultaneously, or apparently simultaneously. The need for concurrency in software first arose in the very early days of computing. Although early computers were far slower than modern machines, their peripheral devices (such as paper-tape or punched-card readers for input, and teletypewriters for output) were slower still, and it was soon recognized that many programs could run much faster if they did not spend so much time waiting for input/output (I/O). The solution is simple in principle: allow I/O transfers to proceed concurrently with each other and with normal computation (i.e., computation involving only the central processor). Achieving this required new features in both hardware and software.
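The benefit of overlapping I/O with computation can be sketched in a few lines of modern code. The fragment below is illustrative only: it stands in for a slow peripheral transfer with a timed delay (`time.sleep`, which does not occupy the processor, much as a device transfer leaves the CPU free) and compares a sequential run against one in which the "transfer" proceeds on a separate thread while the processor computes. The function names and timings are invented for the example.

```python
import threading
import time

def slow_io():
    # Stand-in for a slow peripheral transfer (e.g., reading a card deck);
    # sleeping leaves the processor free, as a real device transfer would.
    time.sleep(0.2)

def compute():
    # Stand-in for pure central-processor work.
    total = 0
    for i in range(2_000_000):
        total += i
    return total

# Sequential: the program waits for the transfer, then computes.
start = time.perf_counter()
slow_io()
compute()
sequential = time.perf_counter() - start

# Concurrent: the transfer overlaps with the computation.
start = time.perf_counter()
io_thread = threading.Thread(target=slow_io)
io_thread.start()   # transfer proceeds in the background
compute()           # processor works meanwhile
io_thread.join()    # wait for the transfer to finish
concurrent = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s  concurrent: {concurrent:.2f}s")
```

On a typical machine the concurrent version takes roughly as long as the slower of the two activities, rather than the sum of both, which is exactly the saving that motivated the earliest concurrency mechanisms.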
Today, such features, known as concurrency mechanisms (the most important of which are discussed in this article), are commonplace and form a vital part of all modern computer systems. The operating system of any computer (e.g., Windows, UNIX, Linux) uses concurrency to let a single user do several different things at once, or to let many users access the computer at the same time, typically by running a large number of tasks concurrently while peripheral devices operate in parallel. The earliest applications of concurrency were in operating systems and other real-time systems, in which the software must perform actions at times determined by external events.