Balancing Sparse Matrices

Tzu-Yi Chen, James Demmel

What is Balancing?

Balancing a matrix A consists of finding a diagonal matrix D such that, for each i, row i and column i of the similarity transform DAD^{-1} have the same norm. Usually done before calculating the eigenvalues of A, balancing can improve the speed and accuracy with which the eigenvalues are computed. In practice the matrix A is also permuted before balancing to isolate diagonal blocks, which further decreases the time taken by the eigensolver.
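
For concreteness, the following Matlab fragment sketches one sweep of such a balancing iteration in the 2-norm: for each i it compares the norms of the off-diagonal parts of row i and column i and scales by the square root of their ratio. This is only an illustration of the definition, not the code distributed on this page; practical codes in the Parlett-Reinsch tradition additionally restrict the scaling factors to powers of the machine radix so that the scaling itself introduces no rounding error. The function name is ours.

    % Minimal sketch of one balancing sweep in the 2-norm (illustration
    % only; not the code distributed on this page).  Repeating the sweep
    % until the scalings settle yields a matrix whose i-th row and column
    % have (nearly) equal norms for every i.
    function [D, B] = balance_sweep(A)
      n = size(A,1);
      D = eye(n);
      B = A;
      for i = 1:n
        row = B(i,:);  row(i) = 0;        % off-diagonal part of row i
        col = B(:,i);  col(i) = 0;        % off-diagonal part of column i
        r = norm(row);  c = norm(col);
        if r > 0 && c > 0
          f = sqrt(c / r);                % equalizes the two off-diagonal norms
          D(i,i) = f * D(i,i);
          B(i,:) = f * B(i,:);            % scaling row i by f and column i
          B(:,i) = B(:,i) / f;            % by 1/f is the similarity D*A*inv(D)
        end
      end
    end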

Balancing can improve the accuracy of the computed eigenvalues because it sometimes reduces the norm of the matrix. Osborne [2] showed that balancing in the 2-norm minimizes the Frobenius norm of the balanced matrix over all diagonal similarity transforms. Eaves et al. [1] showed that if A is nonnegative, balancing in the 1-norm minimizes the sum of its elements. Code for approximately balancing matrices in the 1-norm can be found in linear algebra packages such as EISPACK (under the name balanc), LAPACK (under the name gebal), and Matlab (under the name balance).
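
As a quick illustration of the effect on the norm, the snippet below applies Matlab's built-in balance to a small matrix whose entries span nine orders of magnitude; the matrix is our own example, constructed as an exact diagonal scaling of the all-ones matrix.

    % A is diag(d)*ones(3)*inv(diag(d)) with d = [1 1e-6 1e-9], so its
    % eigenvalues are 3, 0, 0 even though its entries vary widely in size.
    A = [ 1      1e6    1e9  ;
          1e-6   1      1e3  ;
          1e-9   1e-3   1    ];
    [T, B] = balance(A);        % B = T\A*T has nearly equal row/column norms
    [norm(A,1)  norm(B,1)]      % the 1-norm drops from about 1e9 to around 3
    eig(B)                      % the eigenvalues are computed from the smaller matrix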

What is Available Here?

The balancing routines found in the above linear algebra packages are based on the algorithm described by Parlett and Reinsch in [3]. Those routines are written for dense matrices; here we provide two types of algorithms for balancing sparse matrices. Many problems require finding the eigenvalues of large, sparse matrices, and we expect our code to be useful as a preconditioner for the eigensolvers used on such problems.

Our first code is a sparse version of the standard dense code. It takes a matrix, permutes it so that its strongly connected components form diagonal blocks, and balances each component. The C version of the code takes as input a matrix in either column-compressed or permuted-column-compressed format. It returns a matrix in permuted-column-compressed format, together with the (permuted) diagonal matrix used to scale the original matrix and information about the strongly connected components found. The Matlab code consists of a slightly modified version of the C code together with a mex-file interface.
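
The calling sequences for the C and Matlab codes are documented in the tar files below. As a rough sketch of the overall flow (not the distributed code), the following Matlab fragment groups the strongly connected components of a sparse matrix into diagonal blocks and then balances each block; the name scc_balance, the use of digraph/conncomp to find the components, and the use of the built-in balance on each (dense) block are our own choices for illustration.

    % Rough sketch (not the distributed code): permute a sparse matrix so
    % that each strongly connected component becomes a contiguous diagonal
    % block, then balance each block.  The returned B equals
    % diag(d)*A(p,p)*inv(diag(d)), so it is similar to A(p,p) and its
    % diagonal blocks are balanced.
    function [p, d, B] = scc_balance(A)
      n = size(A,1);
      bins = conncomp(digraph(spones(A)), 'Type', 'strong');  % component of each vertex
      [~, p] = sort(bins);                 % permutation grouping the components
      Ap = A(p,p);
      d = ones(n,1);
      for k = 1:max(bins)
        idx = find(bins(p) == k);          % rows/columns of the k-th diagonal block
        [T, ~] = balance(full(Ap(idx,idx)), 'noperm');   % diagonal scaling only
        d(idx) = diag(T);
      end
      B = spdiags(d,0,n,n) * Ap * spdiags(1./d,0,n,n);   % apply the scaling
    end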

The second is a set of Krylov-based methods. These methods access the matrix to be balanced using only matrix-vector and sometimes matrix-transpose-vector multiplications. We provide three algorithms: the first uses only matrix-vector products, while the other two use both matrix-vector and matrix-transpose-vector products. Each algorithm can be described in a few lines, and the Matlab code provided below is no more than 20 lines per algorithm. The rationale for these algorithms can be found in the document "Balancing Sparse Matrices for Computing Eigenvalues".
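
To give the flavor of the approach (the precise algorithms and their analysis are in the paper), the sketch below uses random vectors z with +/-1 entries: the expected value of |(Az)_i|^2 is the squared 2-norm of row i, and that of |(A'z)_i|^2 is the squared 2-norm of column i, so one matrix-vector and one matrix-transpose-vector product per pass give cheap row and column norm estimates from which a diagonal scaling can be accumulated. The function name, update rule, and number of passes here are our own illustrative choices, not necessarily those of the distributed KrylovBalance routines.

    % Illustrative sketch of Krylov-based balancing (see the report and the
    % distributed Matlab codes for the actual algorithms).  A matvec with a
    % random +/-1 vector estimates the row norms of the currently scaled
    % matrix D*A*inv(D); a transpose matvec estimates its column norms; the
    % diagonal d is then nudged so the two sets of norms move together.
    function d = krylov_balance_sketch(A, passes)
      n = size(A,1);
      d = ones(n,1);                        % diagonal of the scaling D
      for k = 1:passes
        z = sign(randn(n,1));               % random vector of +/-1 entries
        p = d .* (A * (z ./ d));            % p = D*A*inv(D)*z   : row estimates
        r = (A' * (d .* z)) ./ d;           % r = inv(D)*A'*D*z  : column estimates
        ok = (p ~= 0) & (r ~= 0);
        d(ok) = d(ok) .* sqrt(abs(r(ok)) ./ abs(p(ok)));   % Osborne-style update
      end
    end

The scaled matrix can then be formed as spdiags(d,0,n,n)*A*spdiags(1./d,0,n,n), or the scaling can be applied implicitly inside the eigensolver's matrix-vector products.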

Code:

Direct Balancing for Sparse Matrices:
C: spbalance_c.tar.gz
Matlab: spbalance_matlab.tar.gz
Krylov-Based Balancing:
Matlab: KrylovBalanceAz, KrylovBalanceAtz, KrylovBalanceCutoff

Publications:

T.-Y. Chen. Balancing Sparse Matrices for Computing Eigenvalues. < abstract > < gzipped postscript > Master's report, UC Berkeley EECS Department, May 1998.

T.-Y. Chen and J.W. Demmel. Balancing Sparse Matrices for Computing Eigenvalues. < abstract > < gzipped postscript > Lin. Alg. Appl., 309:261-287, April 2000.

T.-Y. Chen and J.W. Demmel. Balancing Sparse Matrices for Computing Eigenvalues. In Templates for the Solution of Algebraic Eigenvalue Problems: a Practical Guide, Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. Pages 152-157. SIAM, 2000.

Also available are the slides from a talk I gave at the NERSC scientific computing seminar. < gzipped postscript >

References:

[1] B.C. Eaves, A.J. Hoffman, U.G. Rothblum, and H. Schneider. Line-sum-symmetric scalings of square nonnegative matrices. Mathematical Programming Study, 25:124-141, 1985.

[2] E.E. Osborne. On pre-conditioning of matrices. JACM, 7:338-345, 1960.

[3] B. Parlett and C. Reinsch. Balancing a matrix for calculation of eigenvalues and eigenvectors. Numer. Math., 13:293-304, 1969.