Summary. In this paper a new class of quasi-Newton methods, named LQN, is introduced in order to solve unconstrained minimization problems. The novel approach, which generalizes classical BFGS methods, is based on a Hessian updating formula involving an algebra L of matrices simultaneously diagonalized by a fast unitary transform. The complexity per step of LQN methods is O(n log n), thereby considerably improving the computational efficiency of BFGS. Moreover, since the LQN iterative scheme utilizes only single-indexed arrays, O(n) memory allocations are required. Global convergence properties are investigated. In particular, a global convergence result is obtained under suitable assumptions on f. Numerical experiments [7] confirm that LQN methods are particularly well suited to large scale problems.
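The complexity and memory claims above can be made concrete with a small sketch. It assumes the circulant algebra (simultaneously diagonalized by the FFT) as the instance of L: a Hessian approximation constrained to L is stored as its n transform-domain eigenvalues (O(n) memory) and inverted with two FFTs (O(n log n) per step). The eigenvalue update below is an illustrative per-frequency secant-style correction, not the paper's exact LQN formula, and all function names are ours.

```python
# Minimal sketch of the O(n log n) / O(n) claims, using the circulant
# algebra (diagonalized by the FFT) as a concrete instance of L.
import numpy as np

def apply_L_inverse(eigvals, g):
    """Solve B_L x = g where B_L = F^H diag(eigvals) F (circulant algebra).

    Cost: two FFTs, i.e. O(n log n); storage: n eigenvalues, i.e. O(n).
    """
    return np.real(np.fft.ifft(np.fft.fft(g) / eigvals))

def update_eigvals(eigvals, s, y):
    """Illustrative per-frequency secant-style correction in the transform
    domain; a stand-in for the paper's updating formula."""
    fs, fy = np.fft.fft(s), np.fft.fft(y)
    corr = np.real(np.conj(fs) * fy) / (np.abs(fs) ** 2 + 1e-12)
    # keep the eigenvalues >= 1 so the fixed-step toy loop below stays a
    # descent iteration on the quadratic test problem
    return np.maximum(0.5 * (eigvals + corr), 1.0)

# Usage on a toy quadratic f(x) = 0.5 x^T A x - b^T x
n = 512
rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 10.0, n))
b = rng.standard_normal(n)
x = np.zeros(n)
eigvals = np.ones(n)                    # B_L starts as the identity
for _ in range(50):
    g = A @ x - b
    d = -apply_L_inverse(eigvals, g)    # quasi-Newton direction, O(n log n)
    x_new = x + 0.1 * d                 # fixed step in place of a line search
    eigvals = update_eigvals(eigvals, x_new - x, A @ (x_new - x))
    x = x_new
print("final gradient norm:", np.linalg.norm(A @ x - b))
```

Note that only one-dimensional arrays (the gradient, the step, and the n eigenvalues) appear in the loop, matching the single-indexed-array storage pattern described above.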
In this paper, we present a new class of quasi-Newton methods for effective learning in large multilayer perceptron (MLP) networks. The algorithms introduced in this work, named LQN, utilize an iterative scheme of a generalized BFGS-type method involving a suitable family of matrix algebras L. The main advantages of these innovative methods are that they have O(n log n) complexity per step and require O(n) memory allocations. Numerical experiments, performed on a set of standard benchmarks of MLP networks, show the competitiveness of the LQN methods, especially for large values of n.
The theory and practice of optimal preconditioning in solving a linear system by iterative processes are founded on theoretical facts understandable in terms of a class V of spaces of matrices, including diagonal algebras and group matrix algebras. The V-structure lets us extend some known crucial results of preconditioning theory and obtain useful information on the computability and efficiency of new preconditioners. Three preconditioners not yet considered in the literature, belonging to three corresponding algebras of V, are analyzed in detail. Some experimental results are included.
In this paper we study adaptive L^(k)QN methods, involving special matrix algebras of low complexity, to solve general (non-structured) unconstrained minimization problems. These methods, which generalize the classical BFGS method, are based on an iterative formula which exploits, at each step, an ad hoc chosen matrix algebra L^(k). A global convergence result is obtained under suitable assumptions on f.
Summary. Structured matrix algebras L and a generalized BFGS-type iterative scheme have recently been investigated to introduce low-complexity quasi-Newton methods, named LQN, for solving general (non-structured) minimization problems. In this paper we introduce the L^(k)QN methods, which exploit ad hoc algebras at each step. Since the structure of the updated matrices can be modified at each iteration, the new methods can better fit the Hessian matrix, thereby improving the rate of convergence of the algorithm.
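A hedged sketch of the per-step algebra selection idea: project the current Hessian approximation B onto each candidate algebra via its best Frobenius-norm fit, U diag(diag(U^H B U)) U^H, and keep the algebra with the smallest fit error. The two candidate transforms below (Fourier and Hartley) do arise in this literature, but the selection rule and the dense projection are simplifying assumptions for illustration; in the actual methods the projections are obtained through the fast transforms rather than by forming U explicitly.

```python
# Sketch of choosing the best-fitting algebra L^(k) for a given matrix B.
import numpy as np

def unitary_fourier(n):
    """Unitary DFT matrix; diagonalizes the circulant algebra."""
    jk = np.outer(np.arange(n), np.arange(n))
    return np.exp(-2j * np.pi * jk / n) / np.sqrt(n)

def orthogonal_hartley(n):
    """Symmetric orthogonal Hartley matrix; diagonalizes the Hartley algebra."""
    a = 2 * np.pi * np.outer(np.arange(n), np.arange(n)) / n
    return (np.cos(a) + np.sin(a)) / np.sqrt(n)

def best_fit_in_algebra(B, U):
    """Best Frobenius-norm fit of B among matrices U diag(z) U^H is given by
    z = diag(U^H B U). Dense O(n^3) here purely for illustration."""
    z = np.diag(U.conj().T @ B @ U)
    L = U @ np.diag(z) @ U.conj().T
    return z, np.linalg.norm(B - L)

n = 64
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)        # a generic SPD "Hessian approximation"
for name, U in [("Fourier", unitary_fourier(n)),
                ("Hartley", orthogonal_hartley(n))]:
    _, err = best_fit_in_algebra(B, U)
    print(f"{name} algebra fit error: {err:.3f}")
```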