We introduce an error-resilient distributed computing method based on an extension of the channel polarization phenomenon to distributed algorithms. The method leverages an algorithmic split operation that transforms two identical compute nodes into a slow and a fast worker, paralleling the channel split operation in polar codes. This operation preserves the average runtime, analogous to the conservation of Shannon capacity in channel polarization. By applying this split recursively, in a spirit similar to the Fast Fourier Transform, the method synthesizes virtual compute nodes with dispersed return-time distributions, a phenomenon we call computational polarization. We show that the runtime distributions form a functional martingale process, identify their limiting distributions in closed-form expressions together with non-asymptotic convergence rates, and prove strong convergence results in Banach spaces. We provide an information-theoretic lower bound on the overall runtime of any coded computation method and show that the computational polarization approach asymptotically achieves the optimal runtime for computing linear functions. An important advantage of the method is its near-linear-time decoding procedure, which is significantly cheaper than that of Maximum Distance Separable (MDS) codes.
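The split and recursive construction described above can be illustrated with a small Monte Carlo sketch. The code below assumes the natural min/max reading of the split (the fast virtual worker returns when either of two independent workers finishes, the slow one when both finish) and i.i.d. exponential runtimes; it is an illustrative simulation under these assumptions, not the paper's construction.

```python
import numpy as np

def polarize(samples):
    """Recursive computational-polarization sketch.

    samples has shape (trials, N) with N a power of two; each column is an
    i.i.d. worker runtime.  One stage pairs disjoint workers and forms a
    fast virtual worker (min: done when either finishes) and a slow one
    (max: done when both finish).  Since min + max = t1 + t2, the average
    runtime is preserved.  Each half is then polarized recursively.
    """
    n = samples.shape[-1]
    if n == 1:
        return samples
    a, b = samples[..., 0::2], samples[..., 1::2]      # independent pairs
    fast, slow = np.minimum(a, b), np.maximum(a, b)    # split operation
    return np.concatenate([polarize(fast), polarize(slow)], axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trials, levels = 100_000, 4
    runtimes = rng.exponential(1.0, size=(trials, 2 ** levels))
    virtual = polarize(runtimes)
    print("average runtime (conserved):", virtual.mean())          # ~1.0
    print("virtual-node mean runtimes :",
          np.round(np.sort(virtual.mean(axis=0)), 3))
```

Running the sketch shows the return-time distributions dispersing: the overall mean stays at the original value, while individual virtual nodes become either much faster or much slower as the number of levels grows.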
Index Terms—error correcting codes, polar codes, coding for computation, random processes, martingales in Banach spaces

I. INTRODUCTION

As a result of the recent growth of data, the computing paradigm has shifted toward massively distributed computing systems. Several distributed architectures and software frameworks have been developed for large-scale computational problems. Notable examples include the open source distributed computing framework Apache Spark [1] and