Abstract-This paper studies a flexible algorithm for minimizing a sum of component functions, each of which depends on a large number of decision variables. Such formulations arise naturally in "big data" applications, where each component function describes the loss estimated from the data available at a specific machine and the number of features under consideration is very large. In our algorithm, a coordinator updates a global iterate based on delayed partial gradients of the individual component functions with respect to blocks of coordinates. Delayed incremental gradient and delayed coordinate descent algorithms are obtained as special cases. Under the assumptions of strong convexity and block coordinate-wise Lipschitz continuous partial gradients, we show that the algorithm converges linearly to a ball around the optimal value. In contrast to related proposals in the literature, our algorithm is delay-insensitive: it converges for any bounded information delay, and its step-size parameter can be chosen independently of the maximum delay bound.
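To make the update concrete, the following toy sketch emulates a coordinator that moves one coordinate block of the global iterate along the sum of stale partial gradients reported by the workers. Everything in it is an illustrative assumption rather than the paper's actual method: quadratic components f_j(x) = 0.5*||A_j x - b_j||^2, a fixed step size gamma, uniformly random bounded delays, and a cyclic block-selection rule.

import numpy as np

rng = np.random.default_rng(0)
m, d = 5, 8                                  # number of components, decision variables
A = [rng.standard_normal((d, d)) + 3 * np.eye(d) for _ in range(m)]
b = [rng.standard_normal(d) for _ in range(m)]
blocks = [np.arange(0, 4), np.arange(4, 8)]  # two coordinate blocks

def partial_grad(j, x, blk):
    # Block `blk` of the gradient of f_j(x) = 0.5 * ||A_j x - b_j||^2.
    return (A[j].T @ (A[j] @ x - b[j]))[blk]

gamma, max_delay, T = 1e-3, 3, 2000
history = [np.zeros(d)]                      # recent iterates, to emulate staleness
for k in range(T):
    blk = blocks[k % len(blocks)]            # block updated at step k (cyclic rule)
    x_new = history[-1].copy()
    for j in range(m):
        # Worker j reports its partial gradient at a stale iterate
        # x_{k - tau}, where tau is an arbitrary delay bounded by max_delay.
        tau = rng.integers(0, min(max_delay + 1, len(history)))
        x_stale = history[-1 - tau]
        x_new[blk] -= gamma * partial_grad(j, x_stale, blk)
    history = (history + [x_new])[-(max_delay + 1):]

# For quadratics the optimum solves (sum_j A_j^T A_j) x* = sum_j A_j^T b_j.
x_star = np.linalg.solve(
    sum(Aj.T @ Aj for Aj in A),
    sum(Aj.T @ bj for Aj, bj in zip(A, b)),
)
print("distance to optimum:", np.linalg.norm(history[-1] - x_star))

Because every reported gradient may be evaluated at an iterate up to max_delay steps old, the simulation settles into a neighborhood of the optimum rather than converging exactly, mirroring the "converges linearly to a ball around the optimal value" guarantee stated above.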