31 results on "Tisseur, Francoise"
Search Results
2. The role of topology and mechanics in uniaxially growing cell networks
- Author
-
Erlich, Alexander, Jones, Gareth W., Tisseur, Françoise, Moulton, Derek E., and Goriely, Alain
- Published
- 2020
3. Min-max elementwise backward error for roots of polynomials and a corresponding backward stable root finder
- Author
-
Tisseur, Françoise and Van Barel, Marc
- Published
- 2021
- Full Text
- View/download PDF
4. Polynomial eigenvalue solver based on tropically scaled Lagrange linearization
- Author
-
Van Barel, Marc and Tisseur, Françoise
- Published
- 2018
- Full Text
- View/download PDF
5. The Quadratic Eigenvalue Problem
- Author
-
Tisseur, Françoise and Meerbergen, Karl
- Published
- 2001
6. On the sign characteristics of Hermitian matrix polynomials
- Author
-
Mehrmann, Volker, Noferini, Vanni, Tisseur, Françoise, and Xu, Hongguo
- Published
- 2016
- Full Text
- View/download PDF
7. Triangularizing matrix polynomials
- Author
-
Taslaman, Leo, Tisseur, Françoise, and Zaballa, Ion
- Published
- 2013
- Full Text
- View/download PDF
8. Standard triples of structured matrix polynomials
- Author
-
Al-Ammari, Maha and Tisseur, Françoise
- Published
- 2012
- Full Text
- View/download PDF
9. Hermitian quadratic matrix polynomials: Solvents and inverse problems
- Author
-
Lancaster, Peter and Tisseur, Françoise
- Published
- 2012
- Full Text
- View/download PDF
10. Hermitian matrix polynomials with real eigenvalues of definite type. Part I: Classification
- Author
-
Al-Ammari, Maha and Tisseur, Françoise
- Published
- 2012
- Full Text
- View/download PDF
11. Max-balanced Hungarian scalings
- Author
-
Hook, James, Pestana, Jennifer, Tisseur, Francoise, and Hogg, Jonathan
- Subjects
linear systems of equations, Hungarian scaling, sparse matrices, incomplete LU preconditioner, max-plus algebra, max-balancing, QA, diagonal dominance, MathematicsofComputing_DISCRETEMATHEMATICS, diagonal scaling - Abstract
A Hungarian scaling is a diagonal scaling of a matrix that is typically applied along with a permutation to a sparse linear system before calling a direct or iterative solver. A matrix that has been Hungarian scaled and reordered has all entries of modulus less than or equal to 1 and entries of modulus 1 on the diagonal. An important fact that has been largely overlooked by the previous research into Hungarian scaling of linear systems is that a given matrix typically has a range of possible Hungarian scalings and direct or iterative solvers may behave quite differently under each of these scalings. Since standard algorithms for computing Hungarian scalings return only one scaling, it is natural to ask whether a superior performing scaling can be obtained by searching within the set of all possible Hungarian scalings. To this end we propose a method for computing a Hungarian scaling that is optimal from the point of view of a measure of diagonal dominance. Our method uses max-balancing, which minimizes the largest off-diagonal entries in the scaled and permuted matrix. Numerical experiments illustrate the increased diagonal dominance produced by max-balanced Hungarian scaling as well as the reduced need for row interchanges in Gaussian elimination with partial pivoting and the improved stability of LU factorizations without pivoting. We additionally find that applying the max-balancing scaling before computing incomplete LU preconditioners improves the convergence rate of certain iterative methods. Our numerical experiments also show that the Hungarian scaling returned by the HSL code MC64 has performance very close to that of the optimal max-balanced Hungarian scaling, which further supports the use of this code in practice.
- Published
- 2019
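The abstract above turns on the combinatorial step behind every Hungarian scaling: pairing each row with a large-modulus entry via an assignment (max-plus) problem, after which a permutation puts those entries on the diagonal. The Python sketch below is a hedged illustration of that step only, using scipy.optimize.linear_sum_assignment on the log-moduli and reporting a simple diagonal-dominance measure; the diagonal scaling itself, which codes such as MC64 recover from the dual variables of the assignment problem, and the max-balancing refinement proposed in the paper are not reproduced here. The matrix, seed, and dominance measure are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(42)
n = 6
# Test matrix with entries spread over several orders of magnitude.
A = rng.standard_normal((n, n)) * 10.0 ** rng.integers(-3, 4, size=(n, n))

# Assignment problem on -log|a_ij|: the optimal matching maximizes the
# product of the selected moduli (the max-plus / Hungarian connection).
cost = -np.log(np.abs(A))
row_ind, col_ind = linear_sum_assignment(cost)

perm = np.empty(n, dtype=int)
perm[row_ind] = col_ind          # column matched to each row
Ap = A[:, perm]                  # matched entries now sit on the diagonal

def diag_dominance(M):
    """Smallest ratio |m_ii| / sum_{j != i} |m_ij| over all rows."""
    absM = np.abs(M)
    off = absM.sum(axis=1) - np.diag(absM)
    return np.min(np.diag(absM) / off)

print("dominance before permutation:", diag_dominance(A))
print("dominance after  permutation:", diag_dominance(Ap))   # typically much larger
```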
12. Compact Rational Krylov Methods for Nonlinear Eigenvalue Problems
- Author
-
Lietaert, Pieter, Meerbergen, Karl, and Tisseur, Francoise
- Subjects
Nonlinear eigenvalue problem, Two-sided Krylov method, Nonlinear eigensolver, Rational Krylov - Abstract
We describe a generalization of the compact rational Krylov (CORK) methods for polynomial and rational eigenvalue problems that usually, but not necessarily, come from polynomial or rational approximations of genuinely nonlinear eigenvalue problems. CORK is a family of one-sided methods that reformulates the polynomial or rational eigenproblem as a generalized eigenvalue problem. By exploiting the Kronecker structure of the associated pencil, it constructs a right Krylov subspace in compact form and thereby avoids the high memory and orthogonalization costs that are usually associated with linearizations of high degree matrix polynomials. CORK approximates eigenvalues and their corresponding right eigenvectors but is not suitable in its current form for the computation of left eigenvectors. Our generalization of the CORK method is based on a class of Kronecker structured pencils that include as special cases the CORK pencils, the transposes of CORK pencils, and the symmetrically structured linearizations by Robol, Vandebril, and Van Dooren [SIAM J. Matrix Anal. Appl., 38 (2017), pp. 188--216]. This class of structured pencils facilitates the development of a general framework for the computation of both right- and left-sided Krylov subspaces in compact form, and hence allows the development of two-sided compact rational Krylov methods for nonlinear eigenvalue problems. The latter are particularly efficient when the standard inner product is replaced by a cheaper to compute quasi-inner product. We show experimentally that convergence results similar to CORK can be obtained for a certain quasi-inner product.
- Published
- 2018
13. A max-plus approach to incomplete Cholesky factorization preconditioners
- Author
-
Hook, James, Scott, Jennifer, Tisseur, Francoise, and Hogg, Jonathan
- Subjects
Hungarian scaling, sparsity pattern, Sparse symmetric linear systems, incomplete factorizations, max-plus algebra, Preconditioners - Abstract
We present a new method for constructing incomplete Cholesky factorization preconditioners for use in solving large sparse symmetric positive-definite linear systems. This method uses max-plus algebra to predict the positions of the largest entries in the Cholesky factor and then uses these positions as the sparsity pattern for the preconditioner. Our method builds on the max-plus incomplete LU factorization preconditioner recently proposed in [J. Hook and F. Tisseur, Incomplete LU preconditioner based on max-plus approximation of LU factorization, MIMS Eprint 2016.47, Manchester, 2016] but applied to symmetric positive-definite matrices, which comprise an important special case for the method and its application. An attractive feature of our approach is that the sparsity pattern of each column of the preconditioner can be computed in parallel. Numerical comparisons are made with other incomplete Cholesky factorization preconditioners using problems from a range of practical applications. We demonstrate that the new preconditioner can outperform traditional level-based preconditioners and offer a parallel alternative to a serial limited-memory based approach.
- Published
- 2018
14. Incomplete LU Preconditioner Based on Max-Plus Approximation of LU Factorization
- Author
-
Hook, James and Tisseur, Francoise
- Subjects
incomplete LU factorization, linear systems of equations, Hungarian scaling, preconditioning, sparse matrices, LU factorization, max-plus algebra - Abstract
We present a new method for the a priori approximation of the orders of magnitude of the entries in the LU factors of a complex or real matrix A. This approximation is used to determine the positions of the largest entries in the LU factors of A, and these positions are used as the sparsity pattern for an incomplete LU factorization preconditioner. Our method uses max-plus algebra and is based solely on the moduli of the entries of A. We also present techniques for predicting which permutation matrices will be chosen by Gaussian elimination with partial pivoting. We exploit the strong connection between the field of Puiseux series and the max-plus semiring to prove properties of the max-plus LU factors. Experiments with a set of test matrices from the University of Florida Sparse Matrix Collection show that our max-plus LU preconditioners outperform traditional level of fill methods and have similar performance to those preconditioners computed with more expensive threshold-based methods.
- Published
- 2017
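As a rough flavour of the max-plus idea in the two records above: working only with the log-moduli of the entries, one can run the Schur-complement update of Gaussian elimination in the (max, +) semiring to get crude, cancellation-free order-of-magnitude estimates of where large entries (and hence fill-in) may appear. The sketch below is a naive stand-in under that simplification; it is not the Hook–Tisseur max-plus LU approximation, whose predictions and complexity are considerably more refined. The threshold, test matrix, and function name are illustrative.

```python
import numpy as np

def maxplus_fill_prediction(A, threshold=-8.0):
    """Naive order-of-magnitude prediction of LU fill via (max, +) elimination.

    Works on L = log10|A| and replaces the usual Schur-complement update
    a_ij -= a_ik * a_kj / a_kk by the max-plus analogue
    l_ij = max(l_ij, l_ik + l_kj - l_kk), i.e. an upper-bound-style heuristic
    that ignores cancellation. Returns the pattern of entries predicted to
    exceed 10**threshold in modulus.
    """
    with np.errstate(divide="ignore"):           # log10(0) -> -inf is fine here
        L = np.log10(np.abs(A).astype(float))
    n = L.shape[0]
    for k in range(n - 1):
        update = L[k + 1:, [k]] + L[[k], k + 1:] - L[k, k]
        L[k + 1:, k + 1:] = np.maximum(L[k + 1:, k + 1:], update)
    return L > threshold

rng = np.random.default_rng(1)
A = np.triu(rng.standard_normal((8, 8)), -1)     # upper Hessenberg test matrix
A[rng.random((8, 8)) < 0.4] = 0.0                # introduce extra zeros
np.fill_diagonal(A, 10.0)                        # keep the pivots large
print(maxplus_fill_prediction(A).astype(int))
```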
15. Backward error and condition of polynomial eigenvalue problems
- Author
-
Tisseur, Françoise
- Published
- 2000
- Full Text
- View/download PDF
16. Tropical roots as approximations to eigenvalues of matrix polynomials
- Author
-
Noferini, Vanni, Sharify, Meisam, and Tisseur, Francoise
- Published
- 2015
17. STRUCTURED EIGENVALUE CONDITION NUMBERS.
- Author
-
Karow, Michael, Kressner, Daniel, and Tisseur, Francoise
- Subjects
EIGENVALUES, JORDAN algebras, LINEAR algebra, LIE algebras, AUTOMORPHISMS, HAMILTONIAN systems - Abstract
This paper investigates the effect of structure-preserving perturbations on the eigenvalues of linearly and nonlinearly structured eigenvalue problems. Particular attention is paid to structures that form Jordan algebras, Lie algebras, and automorphism groups of a scalar product. Bounds and computable expressions for structured eigenvalue condition numbers are derived for these classes of matrices, which include complex symmetric, pseudo-symmetric, persymmetric, skewsymmetric, Hamiltonian, symplectic, and orthogonal matrices. In particular we show that under reasonable assumptions on the scalar product, the structured and unstructured eigenvalue condition numbers are equal for structures in Jordan algebras. For Lie algebras, the effect on the condition number of incorporating structure varies greatly with the structure. We identify Lie algebras for which structure does not affect the eigenvalue condition number. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
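For context on the record above: the unstructured (absolute) condition number of a simple eigenvalue lambda with right eigenvector x and left eigenvector y is the standard quantity kappa(lambda) = ||x||_2 ||y||_2 / |y* x|, and the paper asks when restricting perturbations to a structure changes it. The sketch below evaluates only this unstructured baseline with scipy.linalg.eig; the structured condition numbers studied in the paper are not computed, and the test matrix is arbitrary.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))

# Right eigenvectors (columns of X) and left eigenvectors (columns of Y).
lam, Y, X = eig(A, left=True, right=True)

# Unstructured absolute condition number of each simple eigenvalue:
# kappa(lambda_i) = ||x_i||_2 * ||y_i||_2 / |y_i^* x_i|.
kappa = np.array([
    np.linalg.norm(X[:, i]) * np.linalg.norm(Y[:, i])
    / abs(Y[:, i].conj() @ X[:, i])
    for i in range(len(lam))
])
for l, k in zip(lam, kappa):
    print("lambda =", np.round(l, 3), " condition number =", f"{k:.2e}")
```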
18. A CHART OF BACKWARD ERRORS FOR SINGLY AND DOUBLY STRUCTURED EIGENVALUE PROBLEMS.
- Author
-
Tisseur, Francoise
- Subjects
EIGENVALUES, MATRICES (Mathematics) - Abstract
Presents a chart of backward errors for singly and doubly structured eigenvalue problems. Derivation of formulas that are inexpensive to compute; Hermitian matrix; Symplectic matrix; Eigenvector.
- Published
- 2003
- Full Text
- View/download PDF
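As a reference point for the backward-error chart above: for a single matrix A and an approximate eigenpair (x, lambda), the standard unstructured normwise backward error (perturbing A only) is eta = ||Ax - lambda x||_2 / (||A||_2 ||x||_2), the smallest relative perturbation to A that makes the pair exact. The sketch below implements this textbook formula; the structured backward errors tabulated in the paper constrain the perturbation and are not computed here.

```python
import numpy as np

def eigpair_backward_error(A, x, lam):
    """Normwise backward error of the approximate eigenpair (x, lam),
    eta = ||A x - lam x||_2 / (||A||_2 * ||x||_2),
    i.e. the smallest ||dA||_2 / ||A||_2 such that (A + dA) x = lam x."""
    r = A @ x - lam * x
    return np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x))

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
A = 0.5 * (A + A.T)                            # Hermitian test matrix

lam, X = np.linalg.eigh(A)
x = X[:, 0] + 1e-8 * rng.standard_normal(6)    # slightly perturbed eigenvector
print(eigpair_backward_error(A, x, lam[0]))    # small backward error
```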
19. A PARALLEL DIVIDE AND CONQUER ALGORITHM FOR THE SYMMETRIC EIGENVALUE PROBLEM ON DISTRIBUTED MEMORY ARCHITECTURES.
- Author
-
Tisseur, Francoise and Dongarra, Jack
- Subjects
EIGENVALUES, PARALLEL algorithms, SYMMETRIC matrices, SPECTRAL theory, HYPERCUBES, EIGENVECTORS - Abstract
We present a new parallel implementation of a divide and conquer algorithm for computing the spectral decomposition of a symmetric tridiagonal matrix on distributed memory architectures. The implementation we develop differs from other implementations in that we use a two-dimensional block cyclic distribution of the data, we use the Löwner theorem approach to compute orthogonal eigenvectors, and we introduce permutations before the back transformation of each rank-one update in order to make good use of deflation. This algorithm yields the first scalable, portable, and numerically stable parallel divide and conquer eigensolver. Numerical results confirm the effectiveness of our algorithm. We compare performance of the algorithm with that of the QR algorithm and of bisection followed by inverse iteration on an IBM SP2 and a cluster of Pentium IIs.
- Published
- 1999
- Full Text
- View/download PDF
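The divide-and-conquer eigensolver above rests on the rank-one tearing of a symmetric tridiagonal matrix: zeroing one off-diagonal pair splits T into two smaller tridiagonals, at the cost of a rank-one correction. The sketch below verifies that identity numerically and lists the eigenvalues of T next to the pooled eigenvalues of the two halves (which interlace with them); the parallel data distribution, Löwner-based eigenvector computation, and deflation strategy of the paper are not illustrated, and the sizes and split point are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 8, 4                                    # split after row m
d = rng.standard_normal(n)                     # diagonal
e = rng.standard_normal(n - 1)                 # off-diagonal
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# Cuppen's tearing: T = diag(T1, T2) + rho * v v^T, where T1, T2 are the
# leading/trailing blocks with their boundary diagonal entries reduced by rho,
# and v = e_m + e_{m+1}.
rho = e[m - 1]
T1 = T[:m, :m].copy();  T1[-1, -1] -= rho
T2 = T[m:, m:].copy();  T2[0, 0]   -= rho
v = np.zeros(n); v[m - 1] = v[m] = 1.0

B = np.block([[T1, np.zeros((m, n - m))],
              [np.zeros((n - m, m)), T2]])
assert np.allclose(T, B + rho * np.outer(v, v))

# Eigenvalues of T versus the pooled eigenvalues of T1 and T2 (they interlace).
print(np.sort(np.linalg.eigvalsh(T)))
print(np.sort(np.concatenate([np.linalg.eigvalsh(T1), np.linalg.eigvalsh(T2)])))
```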
20. Preface to the 16th ILAS Conference Proceedings, Pisa 2010
- Author
-
Bini, Dario A., Böttcher, Albrecht, Gemignani, Luca, Hogben, Leslie, and Tisseur, Françoise
- Published
- 2013
- Full Text
- View/download PDF
21. Probabilistic rounding error analysis for numerical linear algebra
- Author
-
Connolly, Michael, Higham, Nicholas, and Tisseur, Francoise
- Abstract
This thesis presents new results on probabilistic error analysis. By freeing ourselves of the burden of having to account for every possible worst-case scenario, we provide error bounds that are more informative than the long standing worst-case bounds. We also see in practice how these new bounds can guide the development of new algorithms. We begin by studying stochastic rounding, a rounding mode which rounds a computed result to the next larger or smaller floating-point number randomly. We compare stochastic rounding with round-to-nearest, finding some similarities but also a large number of properties that hold for round-to-nearest but fail to hold for stochastic rounding. Importantly, we show that the rounding errors produced by stochastic rounding are mean zero, mean independent random variables. Using this fact, we build on earlier probabilistic error analysis to show that for a wide range of inner product based linear algebra computations, stochastic rounding provides backward error bounds where we can take the square root of the dimensional constants in the worst-case bounds. This has been a well known rule of thumb for some time, but we show it to be a rule for stochastic rounding. Expanding on this work for inner-product based algorithms, we investigate orthogonal transformations. Using the same model of rounding errors as that which stochastic rounding satisfies, we show that for Householder QR factorization, and other related algorithms, we see the same square rooting of the dimensional constant in the backward error bound. The analysis makes use of a matrix concentration inequality, where previous probabilistic error analyses employed scalar concentration inequalities. We also derive a new backward error formula for QR factorization, used in our numerical experiments which validate our bounds. Finally, we consider the randomized SVD, a well known method for computing low rank matrix approximations. We perform a complete rounding error analysis of the fixed-rank problem, the most straightforward version of the randomized SVD where we have a priori specified a target rank. This is complementary to the existing literature providing error bounds on this procedure in exact arithmetic. While our initial analysis is worst-case, we use our previous probabilistic results to refine these error bounds. Using these refined bounds, we propose a mixed-precision version of the algorithm that offers potential speedups by gradually reducing the precision during the execution of the algorithm.
- Published
- 2023
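The probabilistic analysis in the thesis above leans on stochastic rounding being unbiased: the expected value of the rounded result equals the exact value, so the rounding errors are mean zero. The sketch below demonstrates this on a coarse uniform grid used as a stand-in for a low-precision floating-point format (a simplifying assumption to keep the code short); the function name and grid spacing are illustrative, not part of the thesis.

```python
import numpy as np

def stochastic_round(x, step, rng):
    """Round x to the grid {k*step}, rounding up with probability equal to the
    fractional position between the two neighbouring grid points, so that
    E[SR(x)] = x."""
    low = np.floor(x / step) * step
    frac = (x - low) / step                    # in [0, 1): probability of rounding up
    return low + step * (rng.random(np.shape(x)) < frac)

rng = np.random.default_rng(6)
step = 2.0 ** -4                               # coarse grid standing in for low precision
x = 0.3                                        # not representable on the grid

sr_samples = stochastic_round(np.full(100_000, x), step, rng)
print("round-to-nearest:          ", np.round(x / step) * step)   # fixed, biased value
print("mean of stochastic rounding:", sr_samples.mean())          # close to 0.3 (unbiased)
```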
22. Computing matrix functions in arbitrary precision arithmetic
- Author
-
Liu, Xiaobo, Higham, Nicholas, and Tisseur, Francoise
- Subjects
Matrix cosine, Matrix square root, Matrix Mittag--Leffler function, Multiprecision algorithm, Multiprecision arithmetic, Matrix functions, Schur--Parlett algorithm - Abstract
Functions of matrices play an important role in many applications in science and engineering. Their reliable computation has been a topic of interest in numerical linear algebra over the decades, and a wide variety of methods for computing different functions have been studied. Meanwhile, the interest in multiple precision computing environments has been growing in recent years and nowadays there is an explosion of floating-point arithmetics beyond the most widely used standard IEEE binary32 and binary64 formats. Against this background, there are several works dedicated to the computation of matrix functions in arbitrary precision arithmetic, but overall this research topic has not yet attracted much attention. In this thesis, we study methods for computing functions of matrices in arbitrary precision arithmetic. Unlike many existing algorithms that are tightly coupled to a specific precision of floating-point arithmetic, the algorithms we develop take the unit roundoff of the working precision as an input argument, since this is known only at runtime, and so work in arbitrary precision. First, we provide a version of the Schur--Parlett algorithm that requires only function values and not derivatives. The algorithm requires access to arithmetic of a matrix-dependent precision at least double the working precision, which is used to evaluate the function on the diagonal blocks of order greater than 2 (if there are any) of the reordered and blocked Schur form. The key idea is to compute by diagonalization the function of a small random diagonal perturbation of each diagonal block, where the perturbation ensures that diagonalization will succeed. The algorithm is inspired by Davies's randomized approximate diagonalization method, but we explain why that is not a reliable numerical method for computing matrix functions. This multiprecision Schur--Parlett algorithm is applicable to arbitrary analytic functions f and, like the original Schur--Parlett algorithm, it generally behaves in a numerically stable fashion. The algorithm is especially useful when the derivatives of f are not readily available or accurately computable. We apply our algorithm to the matrix Mittag--Leffler function and show that it yields results of accuracy similar to, and in some cases much greater than, the state-of-the-art algorithm for this function. Second, we develop an algorithm for computing the matrix cosine in arbitrary precision. The algorithm employs a Taylor approximation with scaling and recovering and it can be used with a Schur decomposition or in a decomposition-free manner. We also derive a framework for computing the Frechet derivative, construct an efficient evaluation scheme for computing the cosine and its Frechet derivative simultaneously in arbitrary precision, and show how this scheme can be extended to compute the matrix sine, cosine, and their Frechet derivatives all together. Numerical experiments show that the new algorithms behave in a forward stable way over a wide range of precisions. The transformation-free version of the algorithm for computing the cosine is competitive in accuracy with the state-of-the-art algorithms in double precision and surpasses existing alternatives in both speed and accuracy in working precisions higher than double. Finally, we consider the problem of computing the square root of a perturbation of the scaled identity matrix, A = αI + UV*, where U and V are n × k matrices with k ≤ n. This problem arises in various applications, including computer vision and optimization methods for machine learning. We derive a new formula for the pth root of A that involves a weighted sum of powers of the pth root of the k × k matrix αI_k + V*U. This formula is particularly attractive for the square root, since the sum has just one term when p = 2. We derive a new class of Newton iterations for computing the square root that exploit the low-rank structure. These methods can be employed in arbitrary precision by simply executing all elementary scalar operations in arbitrary precision and, additionally, for the iterative algorithms, by properly adjusting the internal tolerance that is used as stopping criterion. We also propose several Schur-based methods that can utilize the structure of A. In particular, we establish a scheme to obtain the Schur decomposition of the n × n matrix UV* from the Schur decomposition of the k × k matrix V*U. We test these new methods on random matrices and on positive definite matrices arising in applications. Numerical experiments show that the new approaches can yield much smaller residuals than existing alternatives and can be significantly faster when the perturbation UV* has low rank.
- Published
- 2022
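One way to see why the square root of A = αI + UV* in the abstract above reduces to a k × k computation: whenever V*U is nonsingular and W is a square root of αI_k + V*U, the matrix √α·I_n + U(W − √α·I_k)(V*U)^{-1}V* squares exactly to A. The sketch below checks this identity numerically with scipy.linalg.sqrtm; it is a hedged illustration of the reduction only, not the weighted-sum formula, Newton iterations, or Schur-based methods developed in the thesis, and the sizes and scaling are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(7)
n, k, alpha = 50, 3, 1.0
U = 0.1 * rng.standard_normal((n, k))
V = 0.1 * rng.standard_normal((n, k))
A = alpha * np.eye(n) + U @ V.T                # low-rank perturbation of alpha*I

# Reduce the n x n square root to a k x k one (V^T U assumed nonsingular):
# sqrt(A) = sqrt(alpha)*I_n
#           + U ((alpha*I_k + V^T U)^(1/2) - sqrt(alpha)*I_k) (V^T U)^(-1) V^T
X = V.T @ U
W = sqrtm(alpha * np.eye(k) + X)
S = (np.sqrt(alpha) * np.eye(n)
     + U @ (W - np.sqrt(alpha) * np.eye(k)) @ np.linalg.inv(X) @ V.T)

print(np.linalg.norm(S @ S - A) / np.linalg.norm(A))   # tiny relative residual
```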
23. Nonlinear eigenvalue problems : theory and algorithms
- Author
-
Negri Porzio, Gian Maria, Higham, Nicholas, and Tisseur, Francoise
- Subjects
Eigenvalue Problems, Tropical Algebra, Tropical Roots, NLEVP, Numerical Analysis, Eigenvalue, Rational Approximations, Nonlinear eigenvalue problems, Numerical Linear Algebra, Contour Integrals, Eigenpair - Abstract
In this thesis we focus on the theoretical and computational aspects of nonlinear eigenvalue problems (NEPs), which arise in several fields of computational science and engineering, such as fluid dynamics, optics, and structural engineering. In the last twenty years several researchers have devoted their time to studying efficient and precise ways to solve NEPs, which cemented their importance in numerical linear algebra. The most successful algorithms developed towards this goal are either based on contour integrals, or on rational approximations and linearizations. The first part of the thesis is dedicated to contour integral algorithms. In this framework, one computes specific integrals of a holomorphic function G(z) over the contour of a target region and exploits results of complex analysis to retrieve the eigenvalues of G(z) inside it. Our main contribution consists in having expanded the theory to include meromorphic functions, i.e., functions with poles inside the target region. We showed that under some circumstances these algorithms can mistake a pole for an eigenvalue, but these situations are easily recognised and the main results from the holomorphic case can be extended. Furthermore, we proposed several heuristics to automatically choose the parameters that are needed to precisely retrieve the eigenpairs. In the second part of the thesis, we focus on rational approximations. Our goal was to develop algorithms that construct robust, i.e., reliable for a given tolerance and scaling independent, rational approximations for functions given in split form or in black-box form. In the first case, we proposed a variant of the set-valued AAA, named weighted AAA, which is guaranteed to return an approximation with the chosen accuracy. In the second one, we built a two-phase approach, where an initial step performed by the surrogate AAA is followed by a cyclic Leja-Bagby refinement. We concluded the section with numerous numerical experiments based on the NLEVP library, comparing contour integral and rational approximation algorithms. The third and final part of the thesis is about tropical linear algebra. Our research on this topic started as a way to set the parameters of the aforementioned contour integral algorithms: in order to do that, we extended the theory of tropical roots from tropical polynomials to tropical Laurent series. Unlike in the polynomial case, a tropical Laurent series may have infinitely many tropical roots, but they are still in bijection with the slopes of the associated Newton polygon and they still provide annuli of exclusion for the eigenvalues of the Laurent series.
- Published
- 2021
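The contour-integral solvers discussed above are built on the argument principle: if G(z) is holomorphic and nonsingular on a closed contour, then (1/(2πi)) ∮ trace(G(z)^{-1} G'(z)) dz equals the number of eigenvalues enclosed, counting multiplicities. The sketch below checks this count for the simplest case, a linear pencil G(z) = A0 + z·A1, on the unit circle with the trapezoidal rule; the moment-based eigenpair extraction, parameter heuristics, and meromorphic extension in the thesis are not reproduced, and the matrices and quadrature size are illustrative.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(8)
n = 6
A0, A1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))

G  = lambda z: A0 + z * A1          # simplest case: a linear matrix pencil
dG = lambda z: A1

# Reference: eigenvalues of G, i.e. solutions of A0 x = -z A1 x.
lam = eig(A0, -A1, right=False)
inside = int(np.sum(np.abs(lam) < 1.0))

# Argument principle: (1/(2*pi*i)) * integral over |z|=1 of trace(G^{-1} G') dz
# counts the eigenvalues inside the unit circle (trapezoidal rule; accuracy
# degrades if an eigenvalue sits very close to the contour).
N = 400
z = np.exp(2j * np.pi * np.arange(N) / N)
vals = np.array([np.trace(np.linalg.solve(G(zj), dG(zj))) * zj for zj in z])

print("eigenvalues inside the unit circle:", inside)
print("contour-integral estimate         :", np.real(vals.mean()))
```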
24. Theory and algorithms for linear eigenvalue problems
- Author
-
Zemaityte, Mante, Higham, Nicholas, and Tisseur, Francoise
- Subjects
515, max-plus eigenvalue problems, orthogonal polynomials, structural analysis, symmetric generalized eigenvalue problem, shift-and-invert Lanczos algorithm, shifting strategy - Abstract
In the first part of this thesis, methods for the partial solution of generalized eigenvalue problems arising from structural dynamics are studied. A natural choice for partially solving the generalized eigenvalue problem (GEP) is the Lanczos iteration, or the shift-and-invert Lanczos (SIL) algorithm if a large number of eigenpairs is required. When the external loading of the associated structural dynamics problem is specific to that of earthquake engineering, the contribution of an eigenvector of a generalized eigenvalue problem can be quantified by a simple expression. Based on this property, a novel shifting strategy for the SIL algorithm which arises from the theory of orthogonal polynomials is presented. We discuss the Lanczos algorithm and its variant suited for the partial solution of the GEP arising from structural dynamics, followed by the shift-and-invert Lanczos algorithm. Various theoretical aspects and numerical issues of these algorithms are examined and addressed. The Ritz vector and the Lanczos vector methods are also introduced. These methods use vectors other than eigenvectors for the solution of structural dynamics problems. The Ritz vector method is widely used by engineers and is often implemented in engineering software. Orthogonal polynomials and the theory required to devise a new shifting strategy for the SIL algorithm are then introduced. With a specific choice of the starting vector for the Lanczos process it becomes possible to identify the intervals of the spectrum of the associated GEP in which the eigenvalues corresponding to the eigenvectors of interest lie. The shifts for the SIL algorithm are then chosen in the middle of these intervals, and a stopping criterion is provided. The algorithm is termed MASIL. The numerical experiments are performed on real engineering problems with our implementations of SIL, MASIL, Ritz vector, and Lanczos vector methods. While providing a comparable approximation to the solution of the structural dynamics problem, the MASIL approach computes up to 70% fewer eigenvectors and requires fewer shifts, on average, when compared with the standard shifting strategy used for this problem. The Ritz and Lanczos vector methods often provide an accurate approximation to the solution of the structural dynamics problem, but we show that there are cases where these methods do not provide a satisfactory approximation. In the second part of this thesis we explore max-plus algebra. It has been recently observed that order-of-magnitude approximations to the absolute values of the eigenvalues of linear and polynomial eigenvalue problems can be obtained by the use of max-plus algebra, when the underlying matrices are unstructured and have large variations between the magnitudes of their entries. We extend the existing algorithms to return only some of the eigenvalues of the standard and generalized max-plus eigenvalue problems, and provide a result which allows for an estimation of the number of eigenvalues with modulus in a given interval. We then present numerical experiments assessing the quality of the max-plus approximations, and find that order-of-magnitude approximations to the eigenvalues of symmetric definite eigenvalue problems can also be obtained.
- Published
- 2020
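The MASIL strategy above is built around the shift-and-invert Lanczos (SIL) iteration for the generalized eigenproblem K x = λ M x. As a hedged illustration of that basic building block only (not the new shifting strategy or the earthquake-engineering weighting), the sketch below uses scipy.sparse.linalg.eigsh in shift-invert mode, which runs an ARPACK Lanczos process on (K − σM)^{-1}M, to find the eigenvalues of a small definite pencil nearest a chosen shift. The pencil and shift are made up for the example.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 200
# Simple stiffness/mass-like definite pencil: K tridiagonal SPD, M diagonal SPD.
K = diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1]).tocsc()
M = diags(np.linspace(1.0, 2.0, n)).tocsc()

sigma = 0.01                                   # shift: target eigenvalues near it
# Shift-and-invert Lanczos (ARPACK): eigenvalues of (K - sigma*M)^{-1} M are
# transformed back to the eigenvalues of K x = lambda M x closest to sigma.
vals = eigsh(K, k=6, M=M, sigma=sigma, which="LM", return_eigenvectors=False)
print(np.sort(vals))
```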
25. Rational Krylov decompositions : theory and applications
- Author
-
Berljafa, Mario, Tisseur, Francoise, and Guettel, Stefan
- Subjects
512, rational approximation, RKFUN, RKFIT, RKToolbox, parallelization, rational Krylov space, harmonic Ritz values, Ritz values, rational Arnoldi algorithm, rational Krylov method, matrix functions - Abstract
Numerical methods based on rational Krylov spaces have become an indispensable tool of scientific computing. In this thesis we study rational Krylov spaces by considering rational Krylov decompositions; matrix relations which, under certain conditions, are associated with these spaces. We investigate the algebraic properties of such decompositions and present an implicit Q theorem for rational Krylov spaces. We derive standard and harmonic Ritz extraction strategies for approximating the eigenpairs of a matrix and for approximating the action of a matrix function on a vector. While these topics have been considered previously, our approach does not require the last pole to be infinite, which makes the extraction procedure computationally more efficient. Typically, the computationally most expensive component of the rational Arnoldi algorithm for computing a rational Krylov basis is the solution of a large linear system of equations at each iteration. We explore the option of solving several linear systems simultaneously, thus constructing the rational Krylov basis in parallel. If this is not done carefully, the basis being orthogonalized may become poorly conditioned, leading to numerical instabilities in the orthogonalization process. We introduce the new concept of continuation pairs, which gives rise to a near-optimal parallelization strategy that allows us to control the growth of the condition number of this nonorthogonal basis. As a consequence we obtain a more accurate and reliable parallel rational Arnoldi algorithm. The computational benefits are illustrated using our high performance C++ implementation. We develop an iterative algorithm for solving nonlinear rational least squares problems. The difficulty is in finding the poles of a rational function. For this purpose, at each iteration a rational Krylov decomposition is constructed and a modified linear problem is solved in order to relocate the poles to new ones. Our numerical results indicate that the algorithm, called RKFIT, is well suited for model order reduction of linear time invariant dynamical systems and for optimisation problems related to exponential integration. Furthermore, we derive a strategy for the degree reduction of the approximant obtained by RKFIT. The rational function obtained by RKFIT is represented with the aid of a scalar rational Krylov decomposition and an additional coefficient vector. A function represented in this form is called an RKFUN. We develop efficient methods for the evaluation, pole and root finding, and for performing basic arithmetic operations with RKFUNs. Lastly, we discuss RKToolbox, a rational Krylov toolbox for MATLAB, which implements all our algorithms and is freely available from http://rktoolbox.org. RKToolbox also features an extensive guide and a growing number of examples. In particular, most of our numerical experiments are easily reproducible by downloading the toolbox and running the corresponding example files in MATLAB.
- Published
- 2017
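For orientation on the record above: a rational Krylov basis is built by repeatedly solving shifted systems (A − ξ_j I) w = v_j and orthogonalizing, in the same way that the ordinary Arnoldi method repeatedly multiplies by A. The sketch below is a minimal single-vector, sequential rational Arnoldi iteration with fixed poles; the decompositions, implicit Q theorem, continuation pairs, parallel variant, and RKFIT/RKFUN machinery of the thesis (and the RKToolbox implementation) are not reproduced here, and the poles and test matrix are arbitrary.

```python
import numpy as np

def rational_arnoldi(A, b, poles):
    """Minimal rational Arnoldi: orthonormal basis V spanning
    {b, (A - poles[0] I)^{-1} b, (A - poles[1] I)^{-1} v_1, ...}."""
    n = len(b)
    V = np.zeros((n, len(poles) + 1))
    V[:, 0] = b / np.linalg.norm(b)
    I = np.eye(n)
    for j, xi in enumerate(poles):
        w = np.linalg.solve(A - xi * I, V[:, j])     # new direction
        for i in range(j + 1):                       # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(10)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
V = rational_arnoldi(A, b, poles=[-1.0, 2.0, 5.0])
print(np.linalg.norm(V.T @ V - np.eye(V.shape[1])))   # basis is orthonormal
```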
26. Numerical linear algebra problems in structural analysis
- Author
-
Kannan, Ramaseshan, Higham, Nicholas, and Tisseur, Francoise
- Subjects
512, Numerical Linear Algebra, Sparse Matrix algorithms, Structural Analysis, Numerical conditioning, Parallel programming - Abstract
A range of numerical linear algebra problems that arise in finite element-based structural analysis are considered. These problems were encountered when implementing the finite element method in the software package Oasys GSA. We present novel solutions to these problems in the form of a new method for error detection, algorithms with superior numerical efficiency and algorithms with scalable performance on parallel computers. The solutions and their corresponding software implementations have been integrated into GSA's program code and we present results that demonstrate the use of these implementations by engineers to solve real-world structural analysis problems.
- Published
- 2014
27. Algorithms and theory for polynomial eigenproblems
- Author
-
Taslaman, Leo Viktor, Tisseur, Francoise, and Silvester, David
- Subjects
512.9 - Abstract
In this thesis we develop new theoretical and numerical results for matrix polynomials and polynomial eigenproblems. This includes the cases of standard and generalized eigenproblems. Two chapters concern quadratic eigenproblems $(M\lambda^2+D\lambda+K)x=0$, where $M$, $D$ and $K$ enjoy special properties that are commonly encountered in modal analysis. We discuss this application in some detail, in particular the mathematics behind discrete dampers. We show how the physical intuition of a damper that gets stronger and stronger can be mathematically proved using matrix analysis. We then develop an algorithm for quadratic eigenvalue problems with low rank damping, which outperforms existing algorithms both in terms of speed and accuracy. The first part of our algorithm requires the solution of a generalized eigenproblem with semidefinite coefficient matrices. To solve this problem we develop a new algorithm based on an algorithm proposed by Wang and Zhao [SIAM J. Matrix Anal. Appl. 12-4 (1991), pp. 654--660]. The new algorithm computes all eigenvalues in a backward stable and symmetry preserving manner. The next two chapters are about equivalences of matrix polynomials. We show, for an algebraically closed field $\mathbb{F}$, that any matrix polynomial $P(\lambda)\in\mathbb{F}[\lambda]^{n\times m}$, $n\leq m$, can be reduced to triangular form, while preserving the degree and the finite and infinite elementary divisors. We then show that the same result holds for real matrix polynomials if we replace "triangular" with "quasi-triangular", that is, block-triangular with diagonal blocks of size $1\times 1$ and $2 \times 2$. The proofs are constructive in the sense that we build up triangular and quasi-triangular matrix polynomials starting from the Smith form. In this sense we are solving structured inverse problems. In particular, our results imply that the necessary constraints that make a list of elementary divisors admissible for a real square matrix polynomial of degree $\ell$ are also sufficient conditions. For the case of matrix polynomials with invertible leading coefficients, we show how triangular/quasi-triangular forms, as well as diagonal and Hessenberg forms, can be computed numerically. Finally, we present a backward error analysis of the shift-and-invert Arnoldi algorithm for matrices. This algorithm is also of interest to polynomial eigenproblems with easily constructible monic linearizations. The analysis shows how errors from the linear system solves and orthonormalization process affect the Arnoldi recurrence. Residual bounds for linear systems and columnwise backward error bounds for QR factorizations come into play, so we discuss these in some detail. The main result is a set of backward error bounds that can be estimated cheaply. We also use our error analysis to define a sensible condition for "breakdown", that is, a condition for when the Arnoldi iteration should be stopped.
- Published
- 2014
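Several results above (and the quadratic eigenvalue problem survey listed earlier) start from the fact that (Mλ² + Dλ + K)x = 0 can be rewritten as a linear pencil of twice the size. The sketch below builds one standard companion linearization, solves the resulting generalized eigenvalue problem, and checks the residuals of the recovered eigenpairs; it is a generic textbook construction, not the low-rank-damping algorithm, the Wang–Zhao-based solver, or the triangularization results of the thesis.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(11)
n = 5
M, D, K = (rng.standard_normal((n, n)) for _ in range(3))

# First companion linearization of (M*lam^2 + D*lam + K) x = 0:
#   (lam*A + B) [lam*x; x] = 0  with  A = [[M, 0], [0, I]],  B = [[D, K], [-I, 0]].
A = np.block([[M, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
B = np.block([[D, K], [-np.eye(n), np.zeros((n, n))]])

lam, Z = eig(B, -A)                 # generalized eigenproblem B z = -lam A z
X = Z[n:, :]                        # lower block of z = [lam*x; x] recovers x

# Residual check: ||(M*lam^2 + D*lam + K) x|| should be tiny for each eigenpair.
res = [np.linalg.norm((M * l**2 + D * l + K) @ X[:, j]) / np.linalg.norm(X[:, j])
       for j, l in enumerate(lam)]
print(max(res))
```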
28. Analysis of structured polynomial eigenvalue problems
- Author
-
Al-Ammari, Maha Rahma, Tisseur, Francoise, and Powell, Catherine
- Subjects
518 - Abstract
This thesis considers Hermitian/symmetric, alternating and palindromic matrix polynomials which all arise frequently in a variety of applications, such as vibration analysis of dynamical systems and optimal control problems. A classification of Hermitian matrix polynomials whose eigenvalues belong to the extended real line, with each eigenvalue being of definite type, is provided first. We call such polynomials quasidefinite. Definite pencils, definitizable pencils, overdamped quadratics, gyroscopically stabilized quadratics, (quasi)hyperbolic and definite matrix polynomials are all quasidefinite. We show, using homogeneous rotations, special Hermitian linearizations and a new characterization of hyperbolic matrix polynomials, that the main common thread between these many subclasses is the distribution of their eigenvalue types. We also identify, amongst all quasihyperbolic matrix polynomials, those that can be diagonalized by a congruence transformation applied to a Hermitian linearization of the matrix polynomial while maintaining the structure of the linearization. Secondly, we generalize the notion of self-adjoint standard triples associated with Hermitian matrix polynomials in Gohberg, Lancaster and Rodman's theory of matrix polynomials to present spectral decompositions of structured matrix polynomials in terms of standard pairs (X,T), which are either real or complex, plus a parameter matrix S that acquires particular properties depending on the structure under investigation. These decompositions are mainly an extension of the Jordan canonical form for a matrix over the real or complex field so we investigate the important special case of structured Jordan triples. Finally, we use the concept of structured Jordan triples to solve a structured inverse polynomial eigenvalue problem. As a consequence, we can enlarge the collection of nonlinear eigenvalue problems [NLEVP, 2010] by generating quadratic and cubic quasidefinite matrix polynomials in different subclasses from some given spectral data by solving an appropriate inverse eigenvalue problem. For the quadratic case, we employ available algorithms to provide tridiagonal definite matrix polynomials.
- Published
- 2011
29. Algorithms for matrix polynomials and structured matrix problems
- Author
-
Munro, Christopher James and Tisseur, Francoise
- Subjects
512.9 - Published
- 2011
30. Error estimation and stabilization for low order finite elements
- Author
-
Liao, Qifeng, Silvester, David, and Tisseur, Francoise
- Subjects
519, finite element approximation, error estimation, stabilized approximation - Published
- 2010
31. Algorithms for Matrix Functions and their Frechet Derivatives and Condition Numbers
- Author
-
Relton, Samuel David, Tisseur, Francoise, and Higham, Nicholas
- Subjects
Numerical Analysis ,Matrix Exponential ,Matrix Cosine ,Condition Number ,Backward Error Analysis ,Matrix Sine ,Level-2 Condition Number ,Matrix Function ,Matrix Logarithm ,Matrix Powers ,Derivatives - Abstract
Please see full text for abstract.
- Published
- 2014