In this paper, we propose, describe, and test a modification of the K-SVD algorithm. Given a set of training data, the proposed algorithm computes an overcomplete dictionary by minimizing the β-divergence (β ≥ 1) between the data and its representation as linear combinations of atoms of the dictionary, under strict sparsity constraints. In the special case β = 2, the β-divergence reduces to the (squared) Frobenius norm, so the proposed algorithm is equivalent to the original K-SVD algorithm. We describe the modifications needed and discuss the possible shortcomings of the new algorithm. The algorithm is tested with random matrices and with an example based on speech separation.
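For reference, a commonly used form of the entrywise β-divergence and the resulting sparse dictionary-learning objective can be written as below; the symbols Y (data matrix), D (dictionary), X (coefficient matrix), and T (sparsity level) are introduced here only for illustration and need not match the paper's notation:

\[
d_\beta(x \,\|\, y) =
\begin{cases}
x \log\dfrac{x}{y} - x + y, & \beta = 1,\\[6pt]
\dfrac{1}{\beta(\beta-1)}\left( x^{\beta} + (\beta-1)\,y^{\beta} - \beta\, x\, y^{\beta-1} \right), & \beta > 1,
\end{cases}
\]
\[
\min_{D,\,X} \;\sum_{i,j} d_\beta\!\left( Y_{ij} \,\|\, (DX)_{ij} \right)
\quad \text{subject to} \quad \| x_k \|_0 \le T \ \text{for every column } x_k \text{ of } X.
\]

Under this convention, setting β = 2 gives d_2(x‖y) = (1/2)(x − y)², so the objective becomes (1/2)‖Y − DX‖_F², which is consistent with the stated equivalence to the original K-SVD formulation.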