Abstract. The relative importance of nodes in a network can be quantified via functions of the adjacency matrix. Two popular choices of function are the exponential, which is parameter-free, and the resolvent function, which yields the Katz centrality measure. Katz centrality can be the more computationally efficient, especially for large directed networks, and has the benefit of generalizing naturally to time-dependent network sequences, but it depends on a parameter. We give a prescription for selecting the Katz parameter based on the objective of matching the centralities of the exponential counterpart. For our new choice of parameter, the resolvent can be very ill conditioned, but we argue that the centralities computed in floating point arithmetic can nevertheless reliably be used for ranking. Experiments on six real networks show that the new choice of Katz parameter leads to rankings of nodes that generally match those from the exponential centralities well in practice.
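The two measures being compared can be illustrated in a few lines of numpy/scipy. This is only a toy sketch: the graph is invented, and the choice alpha = 0.5/ρ(A) is a common heuristic, not the matching prescription developed in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Small undirected example graph (adjacency matrix); invented for illustration.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

ones = np.ones(A.shape[0])

# Exponential (total communicability) centralities: e^A * 1.
exp_cent = expm(A) @ ones

# Katz centralities: (I - alpha*A)^{-1} * 1, which requires alpha < 1/rho(A).
rho = max(abs(np.linalg.eigvals(A)))
alpha = 0.5 / rho  # common heuristic, NOT the paper's matching prescription
katz_cent = np.linalg.solve(np.eye(A.shape[0]) - alpha * A, ones)

# Compare the induced rankings (indices sorted by descending centrality).
print(np.argsort(-exp_cent))
print(np.argsort(-katz_cent))
```

For this small graph both measures agree on the ranking, with the degree-3 node on top; the paper's point is how to choose alpha so that such agreement is achieved systematically.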
Abstract. A new matrix function corresponding to the scalar unwinding number of Corless, Hare, and Jeffrey is introduced. This matrix unwinding function, U(A), is shown to be a valuable tool for deriving identities involving the matrix logarithm and fractional matrix powers, revealing, for example, the precise relation between log(A^α) and α log A. The unwinding function is also shown to be closely connected with the matrix sign function. An algorithm for computing the unwinding function based on the Schur–Parlett method with a special reordering is proposed. It is shown that matrix argument reduction using the function mod(A) = A − 2πi U(A), which has eigenvalues with imaginary parts in the interval (−π, π] and for which e^A = e^{mod(A)}, can give significant computational savings in the evaluation of the exponential by scaling and squaring algorithms.
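In scalar terms the unwinding number counts how many multiples of 2πi separate z from log(e^z), and the matrix function can be recovered from the same relation. A minimal numpy/scipy sketch, using the defining identity rather than the Schur–Parlett algorithm of the paper, and assuming e^A has no eigenvalues on the closed negative real axis so that logm returns the principal logarithm:

```python
import numpy as np
from scipy.linalg import expm, logm

# Diagonal example whose eigenvalues have imaginary parts outside (-pi, pi].
A = np.diag([1 + 5j, 2 - 7j])

# Matrix unwinding function via the defining relation
#   U(A) = (A - log(e^A)) / (2*pi*i),
# valid here because expm(A) has no eigenvalues on the closed negative real axis.
U = (A - logm(expm(A))) / (2j * np.pi)

# Argument reduction: mod(A) = A - 2*pi*i*U(A) has eigenvalue imaginary
# parts in (-pi, pi], yet exp(mod(A)) equals exp(A).
modA = A - 2j * np.pi * U

print(np.diag(U))
print(np.allclose(expm(modA), expm(A)))
```

Here the two eigenvalues unwind by +1 and −1 respectively, and the reduced matrix mod(A) reproduces the same exponential, which is exactly the property the scaling-and-squaring savings rest on.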
Abstract. Theoretical and computational aspects of matrix inverse trigonometric and inverse hyperbolic functions are studied. Conditions for existence are given, all possible values are characterized, and the principal values acos, asin, acosh, and asinh are defined and shown to be unique primary matrix functions. Various functional identities are derived, some of which are new even in the scalar case, with care taken to specify precisely the choices of signs and branches. New results include a "round trip" formula that relates acos(cos A) to A and similar formulas for the other inverse functions. Key tools used in the derivations are the matrix unwinding function and the matrix sign function. A new inverse scaling and squaring type algorithm employing a Schur decomposition and variable-degree Padé approximation is derived for computing acos, and it is shown how it can also be used to compute asin, acosh, and asinh. In numerical experiments the algorithm is found to behave in a forward stable fashion and to be superior to computing these functions via logarithmic formulas.

Key words. matrix function, inverse trigonometric functions, inverse hyperbolic functions, matrix inverse sine, matrix inverse cosine, matrix inverse hyperbolic sine, matrix inverse hyperbolic cosine, matrix exponential, matrix logarithm, matrix sign function, rational approximation, Padé approximation, MATLAB, GNU Octave, Fréchet derivative, condition number

AMS subject classifications. 15A24, 65F30

DOI. 10.1137/16M1057577

1. Introduction. Trigonometric functions of matrices play an important role in the solution of second order differential equations; see, for example, [5], [37], and the references therein. The inverses of such functions, and of their hyperbolic counterparts, also have practical applications, but have been less well studied. An early appearance of the matrix inverse cosine was in a 1954 paper on the energy equation of a free-electron model [36].
The matrix inverse hyperbolic sine arises in a model of the motion of rigid bodies, expressed via Moser–Veselov matrix equations [12]. The matrix inverse sine and inverse cosine were used by Al-Mohy, Higham, and Relton [5] to define the backward error in approximating the matrix sine and cosine. Matrix inverse trigonometric and inverse hyperbolic functions are also useful for studying argument reduction in the computation of the matrix sine, cosine, and hyperbolic sine and cosine [7]. This work has two aims. The first is to develop the theory of matrix inverse trigonometric functions and inverse hyperbolic functions. Most importantly, we define the principal values acos, asin, acosh, and asinh, prove their existence and uniqueness, and develop various useful identities involving them. In particular, we determine the precise relationship between acos(cos A) and A, and similarly for the other functions. The second aim is to develop algorithms and software for computing acos, asin, acosh, and asinh of a matrix, for which we employ variable-degree Padé approximation to-
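As a point of reference for the algorithm described above, the logarithmic formula acos(A) = −i log(A + i(I − A²)^{1/2}), which the abstract reports is less accurate than the Schur–Padé approach, can be sketched in a few lines. The matrix below is invented for illustration, with spectrum inside (−1, 1) so the principal branches cause no difficulty:

```python
import numpy as np
from scipy.linalg import logm, sqrtm, cosm

# Small test matrix with eigenvalues 0.3 and 0.5, safely inside (-1, 1).
A = np.array([[0.3, 0.1],
              [0.0, 0.5]])

I = np.eye(2)

# Logarithmic formula: acos(A) = -i * log(A + i * sqrt(I - A^2)).
# The paper's Schur decomposition + Pade algorithm is preferred in practice;
# this formula is only a simple sketch.
X = -1j * logm(A + 1j * sqrtm(I - A @ A))

# "Round trip" check: cos(acos(A)) should recover A.
print(np.allclose(cosm(X), A))
```

For matrices with eigenvalues near ±1 or off the real interval, the branch and sign choices that the paper analyzes become essential, and the naive formula can lose accuracy.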