The contribution of this paper is twofold. First, we study convergence rates of the iteratively regularized Gauss-Newton (IRGN) algorithm with a linear penalty term under a generalized source assumption and show how the regularizing properties of the iterations depend on the smoothness of the solution. Second, we introduce an adaptive IRGN procedure, which is investigated under a relaxed smoothness condition. The introduction and analysis of a more general penalty term are of great importance since, apart from stabilizing the numerical scheme for a large class of applied inverse problems, such a term allows us to incorporate various types of a priori information available about the model. Both a priori and a posteriori stopping rules are investigated. For the a priori stopping rule, optimal convergence rates are derived. A numerical example illustrating the convergence rates is presented.
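For orientation, recall the classical IRGN step for a nonlinear equation $F(x) = y$ with noisy data $y^\delta$ (the notation $F$, $y^\delta$, $\alpha_k$, $x_0$ below is the standard one and is not taken from this paper):
\[
x_{k+1} = x_k - \bigl(F'(x_k)^{*} F'(x_k) + \alpha_k I\bigr)^{-1}
\Bigl[ F'(x_k)^{*}\bigl(F(x_k) - y^\delta\bigr) + \alpha_k \bigl(x_k - x_0\bigr) \Bigr],
\]
where $\alpha_k \to 0$ is a regularization sequence and $x_0$ is an initial guess. Roughly speaking, the linear penalty term analyzed here can be viewed as replacing the Tikhonov-type term $\alpha_k I$ (equivalently, $\alpha_k(x_k - x_0)$) by a more general linear operator, which is what allows the a priori information mentioned above to be built into the iteration.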