An iterative differential-difference method for solving nonlinear least squares problems is proposed and studied. Instead of computing the Jacobian, the method uses the sum of the derivative of the differentiable part of the operator and the divided difference of the nondifferentiable part. We prove the local convergence of the proposed method and determine its convergence rate. Finally, we carry out numerical experiments on a set of test problems.

1. Introduction. Nonlinear least squares problems often arise when solving overdetermined systems of nonlinear equations, estimating parameters of physical processes from measurement data, constructing nonlinear regression models for engineering problems, etc. Effective methods for solving nonlinear least squares problems are the Gauss-Newton method and its modifications ([1, 4, 5, 6, 7]). However, in practice, the calculation of derivatives can be very difficult or even impossible. For instance, the functions may be so complex that their derivatives can only be computed approximately, or only the values of the functions at certain points are available (e.g., obtained from experiments), although the functions are known to be nonlinear. Hence, one can use iterative-difference methods ([1, 2, 8, 13]) that do not require the calculation of derivatives and yet approach the Gauss-Newton method in terms of the convergence rate and the number of iterations.

In the case when the nonlinear function has a differentiable part and a nondifferentiable part, one can employ the iterative-difference methods from [1, 2, 8, 13]. However, it is preferable to build iterative methods that take into account the properties of the problem to be solved. This is the approach we follow here. In particular, we can use only the derivative of the differentiable operator instead of the full Jacobian, which, in fact, does not exist. In general, the methods obtained with this approach converge slowly. There are efficient methods [1, 3, 9, 14] that use the sum of the derivative of the differentiable part of the operator and the divided difference of the nondifferentiable part instead of the Jacobian; however, they are designed for solving nonlinear equations. In this work, we follow this approach and design a novel combined method for solving nonlinear least squares problems. The method is based on the Gauss-Newton method, which is applied to the differentiable part of the operator, and a Secant-type method, which relies upon divided differences for the nondifferentiable part. We study the local convergence of this method under the classical and generalized Lipschitz conditions. In the latter, we use some positive integrable
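
To make the idea of the combined iteration concrete, the following is a minimal sketch, not the authors' exact formulation: it assumes the residual splits as f(x) + g(x) with f differentiable (Jacobian fprime available) and g given only by its values, and it takes the operator A_k to be the sum of f'(x_k) and a first-order divided difference of g at the two latest iterates, as described above. All names (f, fprime, g, x0, x1) are illustrative placeholders.

```python
import numpy as np

def divided_difference(g, u, v):
    """First-order divided difference matrix of g: R^n -> R^m at points u, v.
    Column j uses values of g that mix the leading components of u with the
    trailing components of v (a standard componentwise construction)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    n, m = u.size, np.atleast_1d(g(u)).size
    J = np.zeros((m, n))
    w = v.copy()
    for j in range(n):
        w_prev = w.copy()
        w[j] = u[j]                      # replace the j-th component
        denom = u[j] - v[j]
        if denom == 0.0:                 # guard against coinciding components
            denom = 1e-8
            w[j] = v[j] + denom
        J[:, j] = (np.atleast_1d(g(w)) - np.atleast_1d(g(w_prev))) / denom
    return J

def combined_gauss_newton_secant(f, fprime, g, x0, x1, tol=1e-10, maxit=50):
    """Sketch of the combined iteration
        x_{k+1} = x_k - (A_k^T A_k)^{-1} A_k^T (f(x_k) + g(x_k)),
    with A_k = f'(x_k) + g(x_k, x_{k-1}) (derivative plus divided difference)."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(maxit):
        r = f(x) + g(x)                              # current residual
        A = fprime(x) + divided_difference(g, x, x_prev)
        step, *_ = np.linalg.lstsq(A, r, rcond=None) # Gauss-Newton-type step
        x_prev, x = x, x - step
        if np.linalg.norm(step) < tol:
            break
    return x
```

In the sketch, the linear least squares subproblem is solved with a QR-based routine rather than by forming the normal equations A_k^T A_k explicitly, which is the usual numerically safer choice; this is an implementation detail, not a requirement of the method.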