Abstract. In this paper we study an algorithm for solving a minimization problem composed of a differentiable (possibly non-convex) and a convex (possibly non-differentiable) function. The algorithm iPiano combines forward-backward splitting with an inertial force. It can be seen as a non-smooth split version of the Heavy-ball method of Polyak. A rigorous analysis of the algorithm for the proposed class of problems yields global convergence of the function values and the arguments. This makes the algorithm robust for usage on non-convex problems. The convergence result is obtained based on the Kurdyka-Łojasiewicz inequality, a very weak restriction that has been used to prove convergence for several other gradient methods. First, an abstract convergence theorem for a generic algorithm is proved, and then iPiano is shown to satisfy the requirements of this theorem. Furthermore, a convergence rate is established for the general problem class. We demonstrate iPiano on computer vision problems: image denoising with learned priors and diffusion-based image compression.

Key words. non-convex optimization, Heavy-ball method, inertial forward-backward splitting, Kurdyka-Łojasiewicz inequality, proof of convergence

1. Introduction. The gradient method is certainly one of the most fundamental but also one of the simplest algorithms for solving smooth convex optimization problems. In the last decades, the gradient method has been modified in many ways. One of these improvements is to consider so-called multi-step schemes [38, 35]. It has been shown that such schemes significantly boost the performance of the plain gradient method. Triggered by practical problems in signal processing, image processing, and machine learning, there has been an increased interest in so-called composite objective functions, where the objective is given by the sum of a smooth function and a non-smooth function with an easy-to-compute proximal map. This initiated the development of the so-called proximal gradient or forward-backward method [28], which combines explicit (forward) gradient steps with respect to the smooth part with proximal (backward) steps with respect to the non-smooth part.

In this paper, we combine the concepts of multi-step schemes and the proximal gradient method to efficiently solve a certain class of non-convex, non-smooth optimization problems. Although the transfer of knowledge from convex optimization to non-convex problems is very challenging, it holds the promise of efficient algorithms for certain non-convex problems. Therefore, we consider the subclass of non-convex problems in which a differentiable (possibly non-convex) function is minimized together with a convex (possibly non-differentiable) one.
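To make the relation between these methods concrete, the following display sketches the update rules involved. It is an illustrative summary rather than the formal statement of the algorithm: the step size $\alpha > 0$ and the inertial parameter $\beta \in [0,1)$ are shown as constants here, whereas the analysis below also covers iteration-dependent choices. Starting from the gradient method, Polyak's Heavy-ball method adds an inertial term,
\[
x^{k+1} = x^k - \alpha \nabla f(x^k)
\qquad\longrightarrow\qquad
x^{k+1} = x^k - \alpha \nabla f(x^k) + \beta\,(x^k - x^{k-1}),
\]
while the forward-backward method treats the non-smooth part $g$ by a proximal (backward) step,
\[
x^{k+1} = \operatorname{prox}_{\alpha g}\!\bigl(x^k - \alpha \nabla f(x^k)\bigr),
\qquad
\operatorname{prox}_{\alpha g}(y) := \operatorname*{arg\,min}_{x}\ \tfrac{1}{2}\|x - y\|^2 + \alpha\, g(x).
\]
Combining the inertial force of the Heavy-ball method with the proximal step yields the iPiano-type update
\[
x^{k+1} = \operatorname{prox}_{\alpha g}\!\bigl(x^k - \alpha \nabla f(x^k) + \beta\,(x^k - x^{k-1})\bigr),
\]
which reduces to the Heavy-ball method for $g \equiv 0$ and to the forward-backward method for $\beta = 0$.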