2004
DOI: 10.1137/s0895479803434914
Normwise Scaling of Second Order Polynomial Matrices

Abstract. We propose a minimax scaling procedure for second order polynomial matrices that aims to minimize the backward errors incurred in solving a particular linearized generalized eigenvalue problem. We give numerical examples to illustrate that it can significantly improve the backward errors of the computed eigenvalue-eigenvector pairs.

Cited by 56 publications (63 citation statements) | References 3 publications
“…The objective of such an analysis should be the derivation of exact first-order expressions for the errors in the coefficients of the polynomial matrix due to the perturbations in the analyzed Toeplitz matrices. In [45,46], pre-conditioning techniques to reduce the backward error for the polynomial eigenvalue problem are presented. Similar techniques for the whole polynomial eigenstructure problem are welcome.…”
Section: Discussion
confidence: 99%
“…Given a first approximation of the structure at infinity or the null-space of A(s), we can think in some kind of iterative refinement over some manifold where this structure is invariant, and hence over which the problem is well-posed. Another possible way to improve the conditioning (posedness) of the matrix can be a scaling along the lines in [45,46]. The extension of all these results to the polynomial matrix eigenstructure problem is however out of the scope of this paper.…”
Section: Accuracy and Row Pivoting
confidence: 99%
“…, k, then κ_L(x) ≈ κ_P(λ) and the upper bound in (25) will be of order 1; this suggests that scaling the polynomial eigenproblem to try to achieve this condition before computing the eigenpairs via a Frobenius companion linearization could be numerically advantageous. Fan, Lin, and Van Dooren [27] considered the following scaling strategy for quadratics, which converts P(λ) = λ²A₂ + λA₁ + A₀ to P̃(µ) = µ²Ã₂ + µÃ₁ + Ã₀, where λ = γµ and P(λ)δ = µ²(γ²δA₂) + µ(γδA₁) + δA₀ ≡ P̃(µ), and which depends on two nonzero scalar parameters γ and δ. They showed that when A₀ and A₂ are nonzero, γ = √(‖A₀‖₂/‖A₂‖₂) and δ = 2/(‖A₀‖₂ + ‖A₁‖₂γ) solve the problem of minimizing the maximum distance of the scaled coefficient matrix norms from 1: min_{γ,δ} max{ |‖Ã₀‖₂ − 1|, |‖Ã₁‖₂ − 1|, |‖Ã₂‖₂ − 1| }.…”
Section: Impact On Numerical Practice
confidence: 99%
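The scaling quoted above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the function name is ours, and the formulas for γ and δ are the ones given in the excerpt (with ‖·‖₂ the spectral norm).

```python
import numpy as np

def flv_scale(A0, A1, A2):
    """Fan-Lin-Van Dooren scaling of the quadratic P(lam) = lam^2*A2 + lam*A1 + A0.

    Substitutes lam = gamma*mu and multiplies through by delta, returning the
    scaled coefficients (delta*A0, gamma*delta*A1, gamma^2*delta*A2) together
    with gamma and delta. Eigenvalues mu of the scaled quadratic recover the
    original eigenvalues as lam = gamma*mu.
    """
    n0 = np.linalg.norm(A0, 2)
    n1 = np.linalg.norm(A1, 2)
    n2 = np.linalg.norm(A2, 2)
    gamma = np.sqrt(n0 / n2)         # equalizes the norms of the scaled A0 and A2
    delta = 2.0 / (n0 + n1 * gamma)  # pushes the scaled norms toward 1
    return delta * A0, gamma * delta * A1, gamma**2 * delta * A2, gamma, delta
```

With these choices the scaled norms satisfy ‖Ã₀‖₂ = ‖Ã₂‖₂ and ‖Ã₀‖₂ + ‖Ã₁‖₂ = 2, so all three lie near 1 unless the damping term dominates.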
“…There it has been shown (e.g., [30]) that some examples are not solvable without a certain scaling of the coefficient matrices. If we scale M, D, and K such that their norms are essentially the same, following [31], then restricting the magnitude of p helps to reestablish the numerical accuracy in finite-precision arithmetic.…”
Section: Scaling the Polynomial Matrices
confidence: 99%
“…On the other hand, the scaling proposed in [31] aims at optimizing the numerical properties of the 2 × 2 block matrix in the equivalent standard eigenvalue problem. Here we need to scale the polynomial matrix itself, so that its evaluation is numerically more robust.…”
Section: Scaling the Polynomial Matrices
confidence: 99%